Problem 3.2. Let {v1, ..., vn} be a basis for V over R.
ID: 1948411
Question
Problem 3.2. Let {v1, ..., vn} be a basis for V over R.

1. Explain why there must exist n linear functions {f1, ..., fn} such that, for each i = 1, ..., n, fi(vi) = 1 and fi(vj) = 0 for j ≠ i.

2. Prove that the linear functions {f1, ..., fn} defined in the previous part form a basis for V* over R. This basis is called the standard dual basis to {v1, ..., vn}. Note: There are two things you must prove. First show that the set {f1, ..., fn} is linearly independent; second, show that any linear function f ∈ V* can be written as a linear combination of the linear functions f1, ..., fn. (Hint: To show these conditions, consider the values of linear functions on the basis {v1, ..., vn} for V.)
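The hint points to a single evaluation computation. As an illustrative sketch only (the scalars c1, ..., cn below are introduced here for the illustration and are not part of the problem statement), evaluating a linear combination of the fi at a basis vector isolates one coefficient:

% Every term with i ≠ j vanishes because f_i(v_j) = 0; the j-th term is c_j · 1.
\[
  \Bigl(\sum_{i=1}^{n} c_i f_i\Bigr)(v_j)
  = \sum_{i=1}^{n} c_i\, f_i(v_j)
  = c_j, \qquad j = 1, \dots, n.
\]

Both halves of part 2 come down to evaluations of this kind: when the combination is the zero function, every coefficient cj must vanish, and comparing an arbitrary f ∈ V* with the combination whose coefficients are the values f(vj) shows that the two agree on every basis vector.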
Explanation / Answer
a_0 + a_1 x + ... + a_{n-1} x^{n-1} - x^n + a_{n+1} x^{n+1} + ... + a_m x^m is the zero polynomial, so that all coefficients must be 0; but one coefficient is -1, and we have a contradiction.

In Section 4-11 we saw that the set J of proper rational functions with given denominator g(x) was a vector space. If g(x) = (x - 1)^2 (x - 2)^3, we saw that {1/g, x/g, x^2/g, x^3/g, x^4/g} and {1/(x - 1), 1/(x - 1)^2, 1/(x - 2), 1/(x - 2)^2, 1/(x - 2)^3} were each a basis for J.

A finite set {u1, ..., uk} of vectors of the vector space V is said to be linearly dependent if there exist scalars a1, ..., ak, not all 0, such that 0 = a1 u1 + ... + ak uk. If one of the ui, say u1, is the zero vector 0, the set {u1, ..., uk} must be linearly dependent, since 0 = 1·0 + 0·u2 + ... + 0·uk. If one of the ui is a linear combination of the others, then the set {u1, ..., uk} is linearly dependent. For if, for example, u1 = a2 u2 + ... + ak uk, then 0 = (-1)u1 + a2 u2 + ... + ak uk, and here the coefficient of u1 is not 0. The converse is also true (Problem 7 below). Thus we can state: the set {u1, ..., uk} is linearly dependent if, and only if, one of the vectors in the set is a linear combination of the others.

We can also speak of linearly dependent infinite sets. In general, a set A of a vector space is linearly dependent exactly when one vector in the set is a linear combination of a finite number of the other vectors. A subset of a vector space is said to be linearly independent if it is not linearly dependent. Thus a set {u1, ..., uk} is linearly independent if, whenever 0 is expressed as a linear combination of u1, ..., uk, all the coefficients are 0.

We saw in Chapter 1 that {i, j} is a linearly independent set in the space of vectors in the plane, whereas any set of three vectors from this space is linearly dependent. The infinite set {1, x, ..., x^n, ...} is a linearly independent set in the vector space P of all polynomials, since a0·1 + a1 x + ... + am x^m is the zero polynomial exactly when all ai are 0.

It should be noted that the linear dependence or linear independence of a subset depends on the scalars used. For example, when the set C of complex numbers is considered to be a complex vector space, any two elements of C are linearly dependent (for complex scalars). However, C can be considered to be a real vector space (Problem 13 below), and in this space the set {1, i} is a linearly independent set (for real scalars). Usually it is clear from the context whether we are allowing the scalars to be complex numbers or only real numbers. However, if confusion is possible, we shall speak of "linear independence over the reals" and of "linear independence over the complex numbers."
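To make this last point concrete, here is a small check added for illustration (it is not part of the quoted passage):

% Over the reals: a·1 + b·i = 0 with a, b real forces both the real part a and the
% imaginary part b to be 0, so {1, i} is linearly independent over R.
\[
  a \cdot 1 + b \cdot i = 0, \quad a, b \in \mathbb{R}
  \;\Longrightarrow\; a = 0 \ \text{and}\ b = 0.
\]
% Over the complex numbers the relation below uses the nonzero scalars i and -1,
% so the same set {1, i} is linearly dependent over C.
\[
  i \cdot 1 + (-1) \cdot i = 0.
\]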