7.6 Vector Spaces

INTRODUCTION

In the preceding sections we were dealing with points and vectors in 2- and 3-space. Mathematicians in the nineteenth century, notably the English mathematicians Arthur Cayley (1821–1895) and James Joseph Sylvester (1814–1897) and the Irish mathematician William Rowan Hamilton (1805–1865), realized that the concepts of point and vector could be generalized. A realization developed that vectors could be described, or defined, by analytic rather than geometric properties. This was a truly significant breakthrough in the history of mathematics. There is no need to stop with three dimensions; ordered quadruples 〈a1, a2, a3, a4〉, quintuples 〈a1, a2, a3, a4, a5〉, and n-tuples 〈a1, a2, … , an〉 of real numbers can be thought of as vectors just as well as ordered pairs 〈a1, a2〉 and ordered triples 〈a1, a2, a3〉, the only difference being that we lose our ability to visualize directed line segments or arrows in 4-dimensional, 5-dimensional, or n-dimensional space.

n-Space

In formal terms, a vector in n-space is any ordered n-tuple a = 〈a1, a2, … , an〉 of real numbers called the components of a. The set of all vectors in n-space is denoted by Rn. The concepts of vector addition, scalar multiplication, equality, and so on listed in Definition 7.2.1 carry over to Rn in a natural way. For example, if a = 〈a1, a2, … , an〉 and b = 〈b1, b2, … , bn〉, then addition and scalar multiplication in n-space are defined by

a + b = 〈a1 + b1, a2 + b2, … , an + bn〉 and ka = 〈ka1, ka2, … , kan〉. (1)

The zero vector in Rn is 0 = 〈0, 0, … , 0〉. The notion of length or magnitude of a vector a = 〈a1, a2, … , an〉 in n-space is just an extension of that concept in 2- and 3-space:

‖a‖ = √(a1² + a2² + … + an²).

The length of a vector is also called its norm. A unit vector is one whose norm is 1. For a nonzero vector a, the process of constructing a unit vector u by multiplying a by the reciprocal of its norm, that is, u = (1/‖a‖)a, is referred to as normalizing a. For example, if a = 〈3, 1, 2, −1〉, then ‖a‖ = √(9 + 1 + 4 + 1) = √15 and a unit vector is u = (1/√15)〈3, 1, 2, −1〉 = 〈3/√15, 1/√15, 2/√15, −1/√15〉.
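The norm and normalization steps above can be checked numerically. Here is a brief sketch using the running example a = 〈3, 1, 2, −1〉; NumPy is our choice of tool, not something the text itself assumes:

```python
import numpy as np

# The running example: a = <3, 1, 2, -1> in R^4.
a = np.array([3.0, 1.0, 2.0, -1.0])

norm_a = np.sqrt(np.sum(a**2))   # ||a|| = sqrt(9 + 1 + 4 + 1) = sqrt(15)
u = a / norm_a                   # normalize a: multiply by 1/||a||

print(norm_a)                    # sqrt(15), approximately 3.873
print(np.linalg.norm(u))         # 1.0, so u is a unit vector
```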

The standard inner product, also known as the Euclidean inner product or dot product, of two n-vectors a = 〈a1, a2, … , an〉 and b = 〈b1, b2, … , bn〉 is the real number defined by

a · b = a1b1 + a2b2 + … + anbn. (2)

Two nonzero vectors a and b in Rn are said to be orthogonal if and only if a · b = 0. For example, a = 〈3, 4, 1, −6〉 and b = 〈1, 1/2, 1, 1〉 are orthogonal in R4 since a · b = 3 · 1 + 4 · (1/2) + 1 · 1 + (−6) · 1 = 0.
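As a quick numerical check of (2), the pair in the example (taking the second component of b to be 1/2, which makes a · b = 0) can be verified in a line or two of NumPy:

```python
import numpy as np

# a and b from the text; b's second component is 1/2 so that a . b = 0.
a = np.array([3.0, 4.0, 1.0, -6.0])
b = np.array([1.0, 0.5, 1.0, 1.0])

dot = float(np.dot(a, b))   # 3(1) + 4(1/2) + 1(1) + (-6)(1) = 0
print(dot)                  # 0.0 -- a and b are orthogonal in R^4
```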

Vector Space

We can even go beyond the notion of a vector as an ordered n-tuple in Rn. A vector can be defined as anything we want it to be: an ordered n-tuple, a number, an array of numbers, or even a function. But we are particularly interested in vectors that are elements in a special kind of set called a vector space. Fundamental to the notion of vector space are two kinds of objects, vectors and scalars, and two algebraic operations analogous to those given in (1). For a set of vectors we want to be able to add two vectors in this set and get another vector in the same set, and we want to multiply a vector by a scalar and obtain a vector in the same set. Whether a set of objects is a vector space depends on whether the set possesses these two algebraic operations along with certain other properties. These properties, the axioms of a vector space, are given next.

DEFINITION 7.6.1 Vector Space

Let V be a set of elements on which two operations called vector addition and scalar multiplication are defined. Then V is said to be a vector space if the following 10 properties are satisfied.

Axioms for Vector Addition:

(i) If x and y are in V, then x + y is in V.

(ii) For all x, y in V, x + y = y + x. ← commutative law

(iii) For all x, y, z in V, x + (y + z) = (x + y) + z. ← associative law

(iv) There is a unique vector 0 in V such that

0 + x = x + 0 = x. ← zero vector

(v) For each x in V, there exists a vector −x such that

x + (−x) = (−x) + x = 0. ← negative of a vector

Axioms for Scalar Multiplication:

(vi) If k is any scalar and x is in V, then kx is in V.

(vii) k(x + y) = kx + ky ← distributive law

(viii) (k1 + k2)x = k1x + k2x ← distributive law

(ix) k1(k2x) = (k1k2)x

(x) 1x = x

In this brief introduction to abstract vectors we shall take the scalars in Definition 7.6.1 to be real numbers. In this case V is referred to as a real vector space, although we shall not belabor this term. When the scalars are allowed to be complex numbers we obtain a complex vector space. Since properties (i)–(viii) on page 331 are the prototypes for the axioms in Definition 7.6.1, it is clear that R2 is a vector space. Moreover, since vectors in R3 and Rn have these same properties, we conclude that R3 and Rn are also vector spaces. Axioms (i) and (vi) are called the closure axioms and we say that a vector space V is closed under vector addition and scalar multiplication. Note, too, that concepts such as length and inner product are not part of the axiomatic structure of a vector space.

EXAMPLE 1 Checking the Closure Axioms

Determine whether the sets (a) V = {1} and (b) V = {0} under ordinary addition and multiplication by real numbers are vector spaces.

SOLUTION

(a) For this system consisting of one element, many of the axioms given in Definition 7.6.1 are violated. In particular, axioms (i) and (vi) of closure are not satisfied. Neither the sum 1 + 1 = 2 nor the scalar multiple k · 1 = k, for k ≠ 1, is in V. Hence V is not a vector space.

(b) In this case the closure axioms are satisfied since 0 + 0 = 0 and k · 0 = 0 for any real number k. The commutative and associative axioms are satisfied since 0 + 0 = 0 + 0 and 0 + (0 + 0) = (0 + 0) + 0. In this manner it is easy to verify that the remaining axioms are also satisfied. Hence V is a vector space.

The vector space V = {0} is often called the trivial or zero vector space.

If this is your first experience with the notion of an abstract vector, then you are cautioned to not take the names vector addition and scalar multiplication too literally. These operations are defined and as such you must accept them at face value even though these operations may not bear any resemblance to the usual understanding of ordinary addition and multiplication in, say, R, R2, R3, or Rn. For example, the addition of two vectors x and y could be xy. With this forewarning, consider the next example.

EXAMPLE 2 An Example of a Vector Space

Consider the set V of positive real numbers. If x and y denote positive real numbers, then we write vectors in V as x = x and y = y. Now, addition of vectors is defined by

x + y = xy,

and scalar multiplication is defined by

kx = x^k.

Determine whether V is a vector space.

SOLUTION

We shall go through all 10 axioms in Definition 7.6.1.

  1. For x = x > 0 and y = y > 0, x + y = xy > 0. Thus, the sum x + y is in V; V is closed under addition.
  2. Since multiplication of positive real numbers is commutative, we have for all x = x and y = y in V, x + y = xy = yx = y + x. Thus, addition is commutative.
  3. For all x = x, y = y, z = z in V,

    x + (y + z) = x(yz) = (xy)z = (x + y) + z.


    Thus, addition is associative.
  4. Since 1 + x = 1 · x = x and x + 1 = x · 1 = x, the zero vector 0 is 1.
  5. If we define −x = 1/x, then

    x + (−x) = x(1/x) = 1 = 0 and (−x) + x = (1/x)x = 1 = 0.

    Therefore, the negative of a vector is its reciprocal.

  6. If k is any scalar and x = x > 0 is any vector, then kx = x^k > 0. Hence V is closed under scalar multiplication.
  7. If k is any scalar, then

    k(x + y) = (xy)^k = x^k y^k = kx + ky.

  8. For scalars k1 and k2,

    (k1 + k2)x = x^(k1 + k2) = x^k1 x^k2 = k1x + k2x.

  9. For scalars k1 and k2,

    k1(k2x) = (x^k2)^k1 = x^(k1k2) = (k1k2)x.

  10. 1x = x^1 = x.

Since all the axioms of Definition 7.6.1 are satisfied, we conclude that V is a vector space.
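The arithmetic behind Example 2 can be mirrored in code. The following sketch (the names `add` and `smul` are ours) spot-checks the axioms for a few sample values; passing assertions illustrate, but of course do not prove, that the axioms hold:

```python
import math

def add(x, y):       # "vector addition" on V: x + y := xy
    return x * y

def smul(k, x):      # "scalar multiplication" on V: kx := x^k
    return x ** k

x, y, z = 2.0, 5.0, 7.0
k1, k2 = 3.0, -2.0

assert add(x, y) == add(y, x)                                   # axiom (ii)
assert math.isclose(add(x, add(y, z)), add(add(x, y), z))       # axiom (iii)
assert add(1.0, x) == x                                         # zero vector is 1
assert math.isclose(add(x, 1.0 / x), 1.0)                       # -x is 1/x
assert math.isclose(smul(k1, add(x, y)), add(smul(k1, x), smul(k1, y)))  # axiom (vii)
assert math.isclose(smul(k1 + k2, x), add(smul(k1, x), smul(k2, x)))     # axiom (viii)
assert math.isclose(smul(k1, smul(k2, x)), smul(k1 * k2, x))    # axiom (ix)
assert smul(1.0, x) == x                                        # axiom (x)
```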

Here are some important vector spaces—we have mentioned some of these previously. The operations of vector addition and scalar multiplication are the usual operations associated with the set.

  • The set R of real numbers
  • The set R2 of ordered pairs
  • The set R3 of ordered triples
  • The set Rn of ordered n-tuples
  • The set Pn of polynomials of degree less than or equal to n
  • The set P of all polynomials
  • The set of real-valued functions f defined on the entire real line
  • The set C[a, b] of real-valued functions f continuous on the closed interval [a, b]
  • The set C(−∞, ∞) of real-valued functions f continuous on the entire real line
  • The set Cn[a, b] of all real-valued functions f for which f, f′ ,f″ , …, f(n) exist and are continuous on the closed interval [a, b]

Subspace

It may happen that a subset of vectors W of a vector space V is itself a vector space.

DEFINITION 7.6.2 Subspace

If a subset W of a vector space V is itself a vector space under the operations of vector addition and scalar multiplication defined on V, then W is called a subspace of V.

Every vector space V has at least two subspaces: V itself and the zero subspace {0}; {0} is a subspace since the zero vector must be an element in every vector space.

To show that a subset W of a vector space V is a subspace, it is not necessary to demonstrate that all 10 axioms of Definition 7.6.1 are satisfied. Since all the vectors in W are also in V, these vectors must satisfy axioms such as (ii) and (iii). In other words, W inherits most of the properties of a vector space from V. As the next theorem indicates, we need only check the two closure axioms to demonstrate that a subset W is a subspace of V.

THEOREM 7.6.1 Criteria for a Subspace

A nonempty subset W of a vector space V is a subspace of V if and only if W is closed under vector addition and scalar multiplication defined on V:

  1. If x and y are in W, then x + y is in W.
  2. If x is in W and k is any scalar, then kx is in W.
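For a concrete illustration of Theorem 7.6.1, take W = {〈x, y, z〉 : x + y + z = 0}, a plane through the origin in R3 (our own example). The sketch below samples random members of W and checks both closure conditions; it is a numerical spot-check, not a proof:

```python
import numpy as np

rng = np.random.default_rng(0)

def in_W(v, tol=1e-9):
    # membership test for W = {<x, y, z> : x + y + z = 0}
    return abs(v.sum()) < tol

for _ in range(100):
    # build two vectors in W: the third component balances the first two
    x = rng.normal(size=2); x = np.append(x, -x.sum())
    y = rng.normal(size=2); y = np.append(y, -y.sum())
    k = rng.normal()
    assert in_W(x) and in_W(y)
    assert in_W(x + y)      # criterion 1: closed under vector addition
    assert in_W(k * x)      # criterion 2: closed under scalar multiplication
```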

EXAMPLE 3 A Subspace

Suppose f and g are continuous real-valued functions defined on the entire real line. Then we know from calculus that f + g and kf, for any real number k, are continuous and real-valued functions. From this we can conclude that C(−∞, ∞) is a subspace of the vector space of real-valued functions defined on the entire real line.

EXAMPLE 4 A Subspace

The set Pn of polynomials of degree less than or equal to n is a subspace of C(−∞, ∞), the set of real-valued functions continuous on the entire real line.

It is always a good idea to have concrete visualizations of vector spaces and subspaces. The subspaces of the vector space R3 of three-dimensional vectors can be easily visualized by thinking of a vector as a point (a1, a2, a3). Of course, {0} and R3 itself are subspaces; other subspaces are all lines passing through the origin, and all planes passing through the origin. The lines and planes must pass through the origin since the zero vector 0 = (0, 0, 0) must be an element in each subspace.

Similar to Definition 3.1.1 we can define linearly independent vectors.

DEFINITION 7.6.3 Linear Independence

A set of vectors {x1, x2, …, xn} is said to be linearly independent if the only constants satisfying the equation

k1x1 + k2x2 + … + knxn = 0 (3)

are k1 = k2 = … = kn = 0. If the set of vectors is not linearly independent, then it is said to be linearly dependent.

In R3, the vectors i = 〈1, 0, 0〉, j = 〈0, 1, 0〉, and k = 〈0, 0, 1〉 are linearly independent since the equation k1i + k2j + k3k = 0 is the same as

k1〈1, 0, 0〉 + k2〈0, 1, 0〉 + k3〈0, 0, 1〉 = 〈0, 0, 0〉 or 〈k1, k2, k3〉 = 〈0, 0, 0〉.

By equality of vectors, (iii) of Definition 7.2.1, we conclude that k1 = 0, k2 = 0, and k3 = 0. In Definition 7.6.3, linear dependence means that there are constants k1, k2, … , kn not all zero such that k1x1 + k2x2 + … + knxn = 0. For example, in R3 the vectors a = 〈1, 1, 1〉, b = 〈2, −1, 4〉, and c = 〈5, 2, 7〉 are linearly dependent since (3) is satisfied when k1 = 3, k2 = 1, and k3 = −1:

3〈1, 1, 1〉 + 〈2, −1, 4〉 − 〈5, 2, 7〉 = 〈0, 0, 0〉 or 3a + b − c = 0.

We observe that two vectors are linearly independent if neither is a constant multiple of the other.
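Dependence of the triple a, b, c above can also be detected mechanically: n vectors in Rn are linearly independent exactly when the matrix having them as rows has rank n. A short NumPy sketch:

```python
import numpy as np

a = np.array([1.0, 1.0, 1.0])
b = np.array([2.0, -1.0, 4.0])
c = np.array([5.0, 2.0, 7.0])

rank = np.linalg.matrix_rank(np.vstack([a, b, c]))
print(rank)                # 2, not 3 -> the set {a, b, c} is linearly dependent
print(3 * a + b - c)       # the zero vector: the relation 3a + b - c = 0

# By contrast, i, j, k are independent: the identity matrix has rank 3.
print(np.linalg.matrix_rank(np.eye(3)))   # 3
```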

Basis

Any vector in R3 can be written as a linear combination of the linearly independent vectors i, j, and k. In Section 7.2, we said that these vectors form a basis for the system of three-dimensional vectors.

DEFINITION 7.6.4 Basis for a Vector Space

Consider a set of vectors B = {x1, x2, …, xn} in a vector space V. If the set B is linearly independent and if every vector in V can be expressed as a linear combination of these vectors, then B is said to be a basis for V.

Standard Bases

Although we cannot prove it in this course, every vector space has a basis. The vector space Pn of all polynomials of degree less than or equal to n has the basis {1, x, x2, … , xn} since any vector (polynomial) p(x) of degree n or less can be written as the linear combination p(x) = cnxn + … + c2x2 + c1x + c0. A vector space may have many bases. We mentioned previously that the set of vectors {i, j, k} is a basis for R3. But it can be proved that {u1, u2, u3}, where

u1 = 〈1, 0, 0〉, u2 = 〈1, 1, 0〉, u3 = 〈1, 1, 1〉

is a linearly independent set (see Problem 23 in Exercises 7.6) and, furthermore, every vector a = 〈a1, a2, a3〉 can be expressed as a linear combination a = c1u1 + c2u2 + c3u3. Hence, the set of vectors {u1, u2, u3} is another basis for R3. Indeed, any set of three linearly independent vectors is a basis for that space. However, as mentioned in Section 7.2, the set {i, j, k} is referred to as the standard basis for R3. The standard basis for the space Pn is {1, x, x2, … , xn}. For the vector space Rn, the standard basis consists of the n vectors

e1 = 〈1, 0, 0, … , 0〉, e2 = 〈0, 1, 0, … , 0〉, … , en = 〈0, 0, 0, … , 1〉. (4)

If B is a basis for a vector space V, then for every vector v in V there exist scalars ci, i = 1, 2, …, n such that

v = c1x1 + c2x2 + … + cnxn. (5)

The scalars ci, i = 1, 2, … , n, in the linear combination (5) are called the coordinates of v relative to the basis B. In Rn, the n-tuple notation 〈a1, a2, … , an〉 for a vector a means that real numbers a1, a2, … , an are the coordinates of a relative to the standard basis with ei’s in the precise order given in (4).

Read the last sentence several times.
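Finding the coordinates of a vector relative to a basis amounts to solving a linear system. A sketch using the basis {u1, u2, u3} = {〈1, 0, 0〉, 〈1, 1, 0〉, 〈1, 1, 1〉} from earlier, with a sample vector of our own choosing:

```python
import numpy as np

# Columns of U are the basis vectors u1, u2, u3.
U = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
a = np.array([2.0, 5.0, 1.0])    # a sample vector (our choice)

# Solve U c = a for the coordinates c = (c1, c2, c3) of a relative to B.
c = np.linalg.solve(U, a)
print(c)                         # c1 = -3, c2 = 4, c3 = 1: a = -3u1 + 4u2 + u3
assert np.allclose(U @ c, a)     # the linear combination reproduces a
```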

Dimension

If a vector space V has a basis B consisting of n vectors, then it can be proved that every basis for that space must contain n vectors. This leads to the next definition.

DEFINITION 7.6.5 Dimension of a Vector Space

The number of vectors in a basis B for a vector space V is said to be the dimension of the space.

EXAMPLE 5 Dimensions of Some Vector Spaces

  1. In agreement with our intuition, the dimensions of the vector spaces R, R2, R3, and Rn are, in turn, 1, 2, 3, and n.
  2. Since there are n + 1 vectors in the standard basis B = {1, x, x2, … , xn}, the dimension of the vector space Pn of polynomials of degree less than or equal to n is n + 1.
  3. The zero vector space {0} is given special consideration. This space contains only 0 and since {0} is a linearly dependent set, it is not a basis. In this case it is customary to take the empty set as the basis and to define the dimension of {0} as zero.

If a basis of a vector space V contains a finite number of vectors, then we say that the vector space is finite dimensional; otherwise it is infinite dimensional. The function space Cn(I) of n times continuously differentiable functions on an interval I is an example of an infinite-dimensional vector space.

Linear Differential Equations

Consider the homogeneous linear nth-order differential equation

an(x)y(n) + an−1(x)y(n−1) + … + a1(x)y′ + a0(x)y = 0 (6)

on an interval I on which the coefficients are continuous and an(x) ≠ 0 for every x in the interval. A solution y1 of (6) is necessarily a vector in the vector space Cn(I). In addition, we know from the theory examined in Section 3.1 that if y1 and y2 are solutions of (6), then the sum y1 + y2 and any constant multiple ky1 are also solutions. Since the solution set is closed under addition and scalar multiplication, it follows from Theorem 7.6.1 that the solution set of (6) is a subspace of Cn(I). Hence the solution set of (6) deserves to be called the solution space of the differential equation. We also know that if {y1, y2, … , yn} is a linearly independent set of solutions of (6), then the general solution of the differential equation is the linear combination

y = c1y1(x) + c2y2(x) + … + cnyn(x).

Recall that any solution of the equation can be found from this general solution by specialization of the constants c1, c2, … , cn. Therefore, the linearly independent set of solutions {y1, y2, … , yn} is a basis for the solution space. The dimension of this solution space is n.

EXAMPLE 6 Dimension of a Solution Space

The general solution of the homogeneous linear second-order differential equation y″ + 25y = 0 is y = c1 cos 5x + c2 sin 5x. A basis for the solution space consists of the linearly independent vectors {cos 5x, sin 5x}. The solution space is two-dimensional.
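The claim in Example 6 can be spot-checked numerically with a central-difference approximation of the second derivative (our own sketch; the tolerance is loose to absorb finite-difference error):

```python
import math

def second_deriv(f, x, h=1e-4):
    # central-difference approximation of f''(x)
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

def residual(f, x):
    # how far f is from satisfying y'' + 25 y = 0 at x
    return second_deriv(f, x) + 25.0 * f(x)

# an arbitrary member of the solution space: c1 cos 5x + c2 sin 5x
y = lambda x: 2.0 * math.cos(5.0 * x) - 3.0 * math.sin(5.0 * x)

for x in (0.0, 0.7, 2.1):
    assert abs(residual(y, x)) < 1e-3   # zero up to truncation error
```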

The set of solutions of a nonhomogeneous linear differential equation is not a vector space. Several axioms of a vector space are not satisfied; most notably the set of solutions does not contain a zero vector. In other words, y = 0 is not a solution of a nonhomogeneous linear differential equation.

Span

If S denotes any set of vectors {x1, x2, … , xn} in a vector space V, then the set of all linear combinations of the vectors x1, x2, … , xn in S,

{k1x1 + k2x2 + … + knxn},

where the ki, i = 1, 2, … , n are scalars, is called the span of the vectors and written Span(S) or Span(x1, x2, … , xn). It is left as an exercise to show that Span(S) is a subspace of the vector space V. See Problem 33 in Exercises 7.6. Span(S) is called the subspace spanned by the vectors x1, x2, … , xn. If V = Span(S), then we say that S is a spanning set for the vector space V, or that S spans V. For example, each of the three sets

{i, j, k}, {i, i + j, i + j + k}, and {i, j, k, i + j, i + j + k}

is a spanning set for the vector space R3. But note that the first two sets are linearly independent, whereas the third set is linearly dependent. With these new concepts we can rephrase Definitions 7.6.4 and 7.6.5 in the following manner:

A set S of vectors {x1, x2, … , xn} in a vector space V is a basis for V if S is linearly independent and is a spanning set for V. The number of vectors in this spanning set S is the dimension of the space V.
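The three spanning sets listed above can be tested with the same rank idea used for linear independence: a set of vectors spans R3 exactly when the matrix of its vectors has rank 3. A brief sketch:

```python
import numpy as np

i, j, k = np.eye(3)   # the standard basis vectors of R^3

spanning_sets = [
    [i, j, k],
    [i, i + j, i + j + k],
    [i, j, k, i + j, i + j + k],
]
for vecs in spanning_sets:
    M = np.vstack(vecs)
    print(M.shape[0], np.linalg.matrix_rank(M))   # rank 3 in every case

# The third set has 5 vectors but rank only 3, so it is linearly dependent.
```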

REMARKS

(i) Suppose V is an arbitrary real vector space. If there is an inner product defined on V it need not look at all like the standard or Euclidean inner product defined on Rn. In Chapter 12 we will work with an inner product that is a definite integral. We shall denote an inner product that is not the Euclidean inner product by the symbol (u, v). See Problems 30, 31, and 38(b) in Exercises 7.6.

(ii) A vector space V on which an inner product has been defined is called an inner product space. A vector space V can have more than one inner product defined on it. For example, a non-Euclidean inner product defined on R2 is (u, v) = u1v1 + 4u2v2, where u = 〈u1, u2〉 and v = 〈v1, v2〉. See Problems 37 and 38(a) in Exercises 7.6.
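The non-Euclidean inner product quoted in remark (ii) can be checked against the usual inner product axioms numerically (the function name `ip` is ours; this is a spot-check for sample vectors, not a proof):

```python
def ip(u, v):
    # the non-Euclidean inner product on R^2: (u, v) = u1 v1 + 4 u2 v2
    return u[0] * v[0] + 4.0 * u[1] * v[1]

u, v, w, k = (1.0, -2.0), (3.0, 0.5), (-1.0, 4.0), 2.5

ku = (k * u[0], k * u[1])
vw = (v[0] + w[0], v[1] + w[1])

assert ip(u, v) == ip(v, u)                  # symmetry
assert ip(ku, v) == k * ip(u, v)             # homogeneity in the first slot
assert ip(u, vw) == ip(u, v) + ip(u, w)      # additivity in the second slot
assert ip(u, u) > 0                          # positivity for u != 0
```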

(iii) A lot of our work in the later chapters in this text takes place in an infinite-dimensional vector space. As such, we need to extend the definition of linear independence of a finite set of vectors S = {x1, x2, … , xn} given in Definition 7.6.3 to an infinite set:

An infinite set of vectors S = {x1, x2, …} is said to be linearly independent if every finite subset of the set S is linearly independent. If the set S is not linearly independent, then it is linearly dependent.

We note that if S contains a linearly dependent subset, then the entire set S is linearly dependent.

The vector space P of all polynomials has the standard basis B = {1, x, x2, …}. The infinite set B is linearly independent. P is another example of an infinite-dimensional vector space.

7.6 Exercises Answers to selected odd-numbered problems begin on page ANS-17.

In Problems 1–10, determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless stated to the contrary, assume that vector addition and scalar multiplication are the ordinary operations defined on that set.

  1. The set of vectors 〈a1, a2〉, where a1 ≥ 0, a2 ≥ 0
  2. The set of vectors 〈a1, a2〉, where a2 = 3a1 + 1
  3. The set of vectors 〈a1, a2〉, scalar multiplication defined by k〈a1, a2〉 = 〈ka1, 0〉
  4. The set of vectors 〈a1, a2〉, where a1 + a2 = 0
  5. The set of vectors 〈a1, a2, 0〉
  6. The set of vectors 〈a1, a2〉, addition and scalar multiplication defined by
  7. The set of real numbers, addition defined by x + y = xy
  8. The set of complex numbers a + bi, where i2 = −1, addition and scalar multiplication defined by
  9. The set of arrays of real numbers , addition and scalar multiplication defined by
  10. The set of all polynomials of degree 2

In Problems 11–16, determine whether the given set is a subspace of the vector space C(−∞, ∞).

  11. All functions f such that f(1) = 0
  12. All functions f such that f(0) = 1
  13. All nonnegative functions f
  14. All functions f such that f(−x) = f(x)
  15. All differentiable functions f
  16. All functions f of the form f(x) = c1ex + c2xex

In Problems 17–20, determine whether the given set is a subspace of the indicated vector space.

  17. Polynomials of the form p(x) = c3x3 + c1x; P3
  18. Polynomials p that are divisible by x − 2; P2
  19. All unit vectors; R3
  20. Functions f such that ∫ₐᵇ f(x) dx = 0; C[a, b]
  21. In 3-space, a line through the origin can be written as S = {(x, y, z) | x = at, y = bt, z = ct, a, b, c real numbers}. With addition and scalar multiplication the same as for vectors 〈x, y, z〉, show that S is a subspace of R3.
  22. In 3-space, a plane through the origin can be written as S = {(x, y, z) | ax + by + cz = 0, a, b, c real numbers}. Show that S is a subspace of R3.
  23. The vectors u1 = 〈1, 0, 0〉, u2 = 〈1, 1, 0〉, and u3 = 〈1, 1, 1〉 form a basis for the vector space R3.
    (a) Show that u1, u2, and u3 are linearly independent.
    (b) Express the vector a = 〈3, −4, 8〉 as a linear combination of u1, u2, and u3.
  24. The vectors p1(x) = x + 1, p2(x) = x − 1 form a basis for the vector space P1.
    (a) Show that p1(x) and p2(x) are linearly independent.
    (b) Express the vector p(x) = 5x + 2 as a linear combination of p1(x) and p2(x).

In Problems 25–28, determine whether the given vectors are linearly independent or linearly dependent.

  25. 〈4, −8〉, 〈−6, 12〉 in R2
  26. 〈1, 1〉, 〈0, 1〉, 〈2, 5〉 in R2
  27. 1, (x + 1), (x + 1)2 in P2
  28. 1, (x + 1), (x + 1)2, x2 in P2
  29. Explain why f(x) = is a vector in C[0, 3] but not a vector in C[−3, 0].
  30. A vector space V on which a dot or inner product has been defined is called an inner product space. An inner product for the vector space C[a, b] is given by

    (f, g) = ∫ₐᵇ f(x)g(x) dx.

    In C[0, 2π] compute (x, sin x).
  31. The norm of a vector in an inner product space is defined in terms of the inner product. For the inner product given in Problem 30, the norm of a vector f is given by ‖f‖ = √(f, f). In C[0, 2π] compute ‖x‖ and ‖sin x‖.
  32. Find a basis for the solution space of
  33. Let {x1, x2, … , xn} be any set of vectors in a vector space V. Show that Span(x1, x2, … , xn) is a subspace of V.

Discussion Problems

  34. Discuss: Is R2 a subspace of R3? Are R2 and R3 subspaces of R4?
  35. In Problem 9, you should have proved that the set M22 of 2 × 2 arrays of real numbers

    or matrices, is a vector space with vector addition and scalar multiplication defined in that problem. Find a basis for M22. What is the dimension of M22?
  36. Consider a finite orthogonal set of nonzero vectors {v1, v2, … , vk} in Rn. Discuss: Is this set linearly independent or linearly dependent?
  37. If u, v, and w are vectors in a vector space V, then the axioms of an inner product (u, v) are
    (i) (u, v) = (v, u)
    (ii) (ku, v) = k(u, v), k a scalar
    (iii) (u, u) = 0 if u = 0 and (u, u) > 0 if u ≠ 0
    (iv) (u, v + w) = (u, v) + (u, w).

    Show that (u, v) = u1v1 + 4u2v2, where u = 〈u1, u2〉 and v = 〈v1, v2〉, is an inner product on R2.
  38. (a) Find a pair of nonzero vectors u and v in R2 that are not orthogonal with respect to the standard or Euclidean inner product u · v, but are orthogonal with respect to the inner product (u, v) in Problem 37.
    (b) Find a pair of nonzero functions f and g in C[0, 2π] that are orthogonal with respect to the inner product (f, g) given in Problem 30.