Vector Spaces
Introduction: The exercise below is the "Golden Rule" of linear transformations. It tells us that we don't need to know where a function sends every point in a space; we only need to know where it sends the basis vectors. The rest of the space is forced to follow by the laws of linearity. In this vector space exercise we determine a linear transformation by its values on a basis. Up to now we have focused on the "bones" of the vector space (the basis). Now we look at the "muscle": the linear transformation. A linear transformation `phi` is a special type of function that "respects" the structure of the space. Because every vector in `bbb"V"` is a unique combination of basis vectors, a linear transformation is entirely "locked in" once we decide where the basis vectors land in the target space `bbb"W"`.
Exercise: Let `bbb"V"` and `bbb"W"` be vector spaces over the same field `bbb"F"`. A function `phi : bbb"V" rarr bbb"W"` is a linear transformation of `bbb"V"` into `bbb"W"` if the following conditions are satisfied for all `alpha , beta in bbb"V"` and `a in bbb"F"`:
1. `phi(alpha + beta) = phi(alpha) + phi(beta)` (additivity)
2. `phi(a alpha) = a(phi(alpha))` (homogeneity)
(a) If `{beta_i : i in I}` is a basis for `bbb"V"` over `bbb"F"`, show that a linear transformation `phi: bbb"V" rarr bbb"W"` is completely determined by the vectors `phi(beta_i) in bbb"W"`.
(b) If `{beta_i : i in I}` is a basis for `bbb"V"`, and `{w_i : i in I}` in `bbb"W"` is any set of vectors, not necessarily distinct, show that there exists exactly one linear transformation `phi: bbb"V" rarr bbb"W"` such that `phi(beta_i) = w_i`.
Solution: A linear map is completely fixed by its values on a basis, and given arbitrary target vectors for the basis there is exactly one linear map realizing those values.
Part (a): "Determined by the basis values".
Let `{beta_i : i in I}` be a basis for `bbb"V"` and let `phi: bbb"V" rarr bbb"W"` be linear.
1. Every vector in `bbb"V"` is a unique linear combination of the basis: `v = sum_{i in I} a_i beta_i`.
2. Use linearity to compute `phi(v)`:
`phi(v) = phi(sum_i a_i beta_i) = sum_i a_i phi(beta_i)`. This uses additivity and homogeneity repeatedly.
3. The right-hand side involves only the scalars `a_i`, which are uniquely determined by `v`, and the vectors `phi(beta_i) in bbb"W"`. Hence, once all `phi(beta_i)` are known, `phi(v)` is forced for every `v in bbb"V"`; there is no freedom left. That is precisely what "`phi` is completely determined by the vectors `phi(beta_i)`" means.
Example for (a): `bbb"V" = bbb"W" = RR^2`.
Take the standard basis `e_1 = (1 , 0) , \ e_2 = (0 , 1)`. Suppose you are told a linear map `phi: RR^2 rarr RR^2` satisfies `phi(e_1) = (2 , 1) , \ phi(e_2) = (-1 , 3)`. Any vector `x = (x_1 , x_2)` has the unique representation `x = x_1 e_1 + x_2 e_2`. By linearity, `phi(x) = phi(x_1 e_1 + x_2 e_2) = x_1 phi(e_1) + x_2 phi(e_2) = x_1 (2 , 1) + x_2 (-1 , 3) = (2x_1 - x_2 , x_1 + 3 x_2)`.
So once `phi(e_1)` and `phi(e_2)` are fixed, the formula for `phi(x)` for any `x` is automatic and unique.
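The computation above can be sketched numerically: a minimal numpy check that the map built from the two basis images alone reproduces the closed-form formula. (The matrix `A` below is an illustrative construction, not part of the original exercise.)

```python
import numpy as np

# Images of the standard basis vectors under phi, from the example.
phi_e1 = np.array([2.0, 1.0])
phi_e2 = np.array([-1.0, 3.0])

# The matrix of phi has phi(e_1) and phi(e_2) as its columns,
# so phi(x) = A @ x for every x in R^2.
A = np.column_stack([phi_e1, phi_e2])

def phi(x):
    """Apply phi using only the two basis images."""
    return A @ x

# Compare against the closed-form formula (2*x1 - x2, x1 + 3*x2).
x = np.array([3.0, 2.0])
expected = np.array([2*x[0] - x[1], x[0] + 3*x[1]])
assert np.allclose(phi(x), expected)
```

The assertion confirms that fixing the basis images leaves no further choice: the formula for `phi(x)` follows automatically.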
Part (b): Existence and uniqueness.
Let `{beta_i : i in I}` be a basis of `bbb"V"` and let `{w_i : i in I} subseteq bbb"W"` be any family of vectors (no assumptions such as linear independence). We must show:
- There exists a linear `phi : bbb"V" rarr bbb"W"` with `phi(beta_i) = w_i` for all `i`.
- This `phi` is unique.
This is a standard theorem about linear maps defined by bases.
`"Existence"`: Define first on basis vectors by
`phi(beta_i) := w_i` for all `i in I`. Now extend `phi` to all of `bbb"V"`:
1. Take any `v in bbb"V"`. Since `{beta_i}` is a basis , there is a unique finite combination `v = sum_{i in I} a_i beta_i`.
2. Define `phi(v) := sum_{i in I} a_i w_i`. Because the representation of `v` as `sum a_i beta_i` is unique, this function is `"well-defined"` (one input doesn't give two different outputs).
We must check that this `phi` is `"linear"`:
- `"Additivity"`: If `u = sum_i a_i beta_i` and `v = sum_i b_i beta_i` , then `u + v = sum_i (a_i + b_i)beta_i`. Hence `phi(u + v) = sum_i (a_i + b_i)w_i = sum_i a_i w_i + sum_i b_i w_i = phi(u) + phi(v)`.
- `"Homogeneity"`: For `a in bbb"F"`, `a u = sum_i (a a_i) beta_i implies phi(a u) = sum_i (a a_i) w_i = a sum_i a_i w_i = a phi(u)`.
So the map defined this way is linear and satisfies `phi(beta_i) = w_i` by construction.
- `"Uniqueness"`: Suppose `phi , psi : bbb"V" rarr bbb"W"` are linear and both satisfy `phi(beta_i) = w_i = psi(beta_i)` for all `i`. Take any `v in bbb"V"` with representation `v = sum_i a_i beta_i`. Then `phi(v) = phi(sum_i a_i beta_i) = sum_i a_i phi(beta_i) = sum_i a_i w_i`, and similarly `psi(v) = sum_i a_i psi(beta_i) = sum_i a_i w_i`. Hence `phi(v) = psi(v)` for every `v`, so `phi = psi`.
Thus the linear transformation with the prescribed values on the basis is unique.
Example for (b): `RR^2 rarr RR^3`.
Let `bbb"V" = RR^2` with basis `beta_1 = (1 , 0) , beta_2 = (0 , 1)` , and `bbb"W" = RR^3`. Pick any two vectors in `RR^3` , for instance `w_1 = (1 , 0 , 2) , w_2 = (3 , -1 , 4)`. By the theorem , there exists one linear `phi: RR^2 rarr RR^3` such that `phi(beta_1) = w_1 , phi(beta_2) = w_2`. For a general vector `x = (x_1 , x_2)` we write `x = x_1 beta_1 + x_2 beta_2` , and define `phi(x) = x_1 w_1 + x_2 w_2 = x_1 (1 , 0 , 2) + x_2 (3 , -1 , 4) = (x_1 + 3x_2 , -x_2 , 2x_1 + 4x_2)`.
- This `phi` is linear by construction (sum and scalar multiple are preserved).
- Any other linear map agreeing with `phi` on `beta_1 , beta_2` must give the same formula , so it must be the same map.
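The `RR^2 rarr RR^3` example can be sketched as a short numpy check that the map defined by the prescribed images `w_1 , w_2` matches the formula in the text and passes both linearity tests. (The variable names are illustrative, not from the exercise.)

```python
import numpy as np

# Prescribed images of the basis; w_1, w_2 need not be independent or distinct.
w1 = np.array([1.0, 0.0, 2.0])
w2 = np.array([3.0, -1.0, 4.0])

# 3x2 matrix whose columns are the prescribed images.
A = np.column_stack([w1, w2])

def phi(x):
    """phi(x) = x_1*w_1 + x_2*w_2, the unique linear extension."""
    return A @ x

# Formula from the text: (x1 + 3*x2, -x2, 2*x1 + 4*x2).
x = np.array([2.0, 1.0])
assert np.allclose(phi(x), [5.0, -1.0, 8.0])

# Linearity checks: additivity and homogeneity.
u, v = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
assert np.allclose(phi(u + v), phi(u) + phi(v))
assert np.allclose(phi(5.0 * u), 5.0 * phi(u))
```

Note that no condition on `w_1 , w_2` was needed: any pair of target vectors yields a valid linear map.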
Example (with a non-standard basis): `RR^2 rarr RR^2`.
Let `bbb"V" = RR^2` but now use the basis `beta_1 = (1 , 1) , beta_2 = (1 , -1)`. Let `bbb"W" = RR^2` and prescribe `v_1 = (1 , 0) , v_2 = (0 , 2)`. There is exactly one linear map `phi` with `phi(beta_1) = (1 , 0) , phi(beta_2) = (0 , 2)`.
To compute `phi(x , y)`:
1. First express `(x , y)` in the basis `beta_1 , beta_2`. Solve `(x , y) = a(1 , 1) + b(1 , -1) = (a + b , a - b)`.
So `a + b = x , a - b = y implies a = (x + y)/2 , b = (x - y)/2`.
2. Then `phi(x , y) = phi(a beta_1 + b beta_2) = a phi(beta_1) + b phi(beta_2) = a(1 , 0) + b(0 , 2) = ((x + y)/2 , x - y)`.
Again there is no alternative linear map with these basis values. Any such map must use the same coefficients `a , b` and the same prescription on `beta_1 , beta_2`, hence must coincide with this `phi`.
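The two-step recipe for a non-standard basis (solve for the coordinates, then combine the prescribed images) can be sketched with `numpy.linalg.solve`. This is a minimal illustration of the computation in the text, not part of the original exercise.

```python
import numpy as np

# Columns are the basis vectors beta_1 = (1, 1), beta_2 = (1, -1).
B = np.column_stack([[1.0, 1.0], [1.0, -1.0]])
# Columns are the prescribed images phi(beta_1) = (1, 0), phi(beta_2) = (0, 2).
W = np.column_stack([[1.0, 0.0], [0.0, 2.0]])

def phi(x):
    # Step 1: coordinates (a, b) of x in the basis {beta_1, beta_2},
    # i.e. solve B @ (a, b) = x.
    coords = np.linalg.solve(B, x)
    # Step 2: phi(x) = a*phi(beta_1) + b*phi(beta_2).
    return W @ coords

# For (x, y) = (3, 1): a = 2, b = 1, so phi(3, 1) = (2, 2),
# matching the formula ((x + y)/2, x - y).
assert np.allclose(phi(np.array([3.0, 1.0])), [2.0, 2.0])
```

The same two steps work for any basis of `RR^n`: changing coordinates first is what makes the standard-basis shortcut unnecessary.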
QED
Result: A linear transformation is essentially a "lookup table" for the basis. Once the basis vectors have their target in `bbb"W"` , the rest of the transformation is set in stone.
END
Transition Note: `"The Lie Algebra Minute"`.
Note: This theorem is exactly what we used in our last Lie Algebra exercise! When we built the matrix for `ad_h` , we didn't calculate the bracket for every possible matrix in `sl(2 , RR)`. We only calculated what `ad_h` did to the basis `{e , f , h}`. Because of the theorem we just proved, these three results were enough to define the transformation for the entire `3D` algebra.
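The `ad_h` computation mentioned in the note can be sketched concretely. The sketch below assumes the standard basis of `sl(2 , RR)` (`e` upper-triangular, `f` lower-triangular, `h` diagonal) and the commutator bracket `[x , y] = xy - yx`; only the three basis brackets are computed, exactly as the theorem permits.

```python
import numpy as np

# Assumed standard basis of sl(2, R).
e = np.array([[0.0, 1.0], [0.0, 0.0]])
f = np.array([[0.0, 0.0], [1.0, 0.0]])
h = np.array([[1.0, 0.0], [0.0, -1.0]])

def bracket(x, y):
    """Commutator [x, y] = xy - yx."""
    return x @ y - y @ x

# ad_h evaluated on the three basis elements only:
# [h, e] = 2e, [h, f] = -2f, [h, h] = 0,
# so in the basis (e, f, h) the matrix of ad_h is diag(2, -2, 0).
assert np.allclose(bracket(h, e), 2 * e)
assert np.allclose(bracket(h, f), -2 * f)
assert np.allclose(bracket(h, h), 0 * h)
```

Three brackets suffice: by the theorem, `ad_h` is thereby determined on every element of the 3-dimensional algebra.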
Question
The "Freedom" of Choice.
In part (b), the exercise mentions that the set of vectors `{w_i}` in `bbb"W"` does not have to be distinct and doesn't even have to be a basis. What does this imply about our ability to create linear transformations?
A) We can map the basis vectors of `bbb"V"` to any vectors in `bbb"W"` we like, and a valid transformation will always exist.
B) We are restricted: we can only map basis vectors of `bbb"V"` to independent vectors in `bbb"W"`, otherwise the function fails the linearity test.