4.2 Inverse Transforms and Transforms of Derivatives

INTRODUCTION

In this section we take a few small steps into an investigation of how the Laplace transform can be used to solve certain types of equations. After we discuss the concept of the inverse Laplace transform and examine the transforms of derivatives, we then use the Laplace transform to solve some simple ordinary differential equations.

4.2.1 Inverse Transforms

The Inverse Problem

If F(s) represents the Laplace transform of a function f(t), that is, $\mathcal{L}\{f(t)\} = F(s)$, we then say f(t) is the inverse Laplace transform of F(s) and write $f(t) = \mathcal{L}^{-1}\{F(s)\}$. For example, from Examples 1, 2, and 3 in Section 4.1 we have, respectively,

$$1 = \mathcal{L}^{-1}\left\{\frac{1}{s}\right\}, \qquad t = \mathcal{L}^{-1}\left\{\frac{1}{s^2}\right\}, \qquad \text{and} \qquad e^{-3t} = \mathcal{L}^{-1}\left\{\frac{1}{s+3}\right\}.$$

The analogue of Theorem 4.1.1 for the inverse transform is presented next.

THEOREM 4.2.1 Some Inverse Transforms

(a) $1 = \mathcal{L}^{-1}\left\{\dfrac{1}{s}\right\}$

(b) $t^n = \mathcal{L}^{-1}\left\{\dfrac{n!}{s^{n+1}}\right\}$, n = 1, 2, 3, ...

(c) $e^{at} = \mathcal{L}^{-1}\left\{\dfrac{1}{s-a}\right\}$

(d) $\sin kt = \mathcal{L}^{-1}\left\{\dfrac{k}{s^2+k^2}\right\}$

(e) $\cos kt = \mathcal{L}^{-1}\left\{\dfrac{s}{s^2+k^2}\right\}$

(f) $\sinh kt = \mathcal{L}^{-1}\left\{\dfrac{k}{s^2-k^2}\right\}$

(g) $\cosh kt = \mathcal{L}^{-1}\left\{\dfrac{s}{s^2-k^2}\right\}$

When evaluating inverse transforms, it often happens that a function of s under consideration does not match exactly the form of a Laplace transform F(s) given in a table. It may be necessary to “fix up” the function of s by multiplying and dividing by an appropriate constant.

EXAMPLE 1 Applying Theorem 4.2.1

Find

(a) $\mathcal{L}^{-1}\left\{\dfrac{1}{s^5}\right\}$

(b) $\mathcal{L}^{-1}\left\{\dfrac{1}{s^2+7}\right\}$.

SOLUTION

(a) To match the form given in part (b) of Theorem 4.2.1, we identify n + 1 = 5, or n = 4, and then multiply and divide by 4!:

$$\mathcal{L}^{-1}\left\{\frac{1}{s^5}\right\} = \frac{1}{4!}\,\mathcal{L}^{-1}\left\{\frac{4!}{s^5}\right\} = \frac{1}{24}\,t^4.$$

(b) To match the form given in part (d) of Theorem 4.2.1, we identify $k^2 = 7$ and so $k = \sqrt{7}$. We fix up the expression by multiplying and dividing by $\sqrt{7}$:

$$\mathcal{L}^{-1}\left\{\frac{1}{s^2+7}\right\} = \frac{1}{\sqrt{7}}\,\mathcal{L}^{-1}\left\{\frac{\sqrt{7}}{s^2+7}\right\} = \frac{1}{\sqrt{7}}\sin\sqrt{7}\,t.$$
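The "fix up" bookkeeping in Example 1 can be checked with a computer algebra system. The following is a minimal sketch using the open-source CAS SymPy (our choice of software, not one prescribed by the text); `inverse_laplace_transform` reproduces the results obtained by hand.

```python
# Checking Example 1 with SymPy (a sketch; assumes SymPy is installed).
from sympy import symbols, inverse_laplace_transform, sin, sqrt, factorial, simplify

t = symbols('t', positive=True)   # t > 0 suppresses the Heaviside(t) factor
s = symbols('s')

# (a)  L^{-1}{1/s^5}: identify n + 1 = 5 and fix up with 4!
fa = inverse_laplace_transform(1/s**5, s, t)
print(simplify(fa - t**4/factorial(4)))          # 0, i.e., fa = t^4/24

# (b)  L^{-1}{1/(s^2 + 7)}: identify k^2 = 7 and fix up with sqrt(7)
fb = inverse_laplace_transform(1/(s**2 + 7), s, t)
print(simplify(fb - sin(sqrt(7)*t)/sqrt(7)))     # 0, i.e., fb = sin(sqrt(7) t)/sqrt(7)
```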

Linearity of the Inverse Laplace Transformation

Analogous to (5) of Section 4.1, the inverse Laplace transform is also a linear transform. Suppose the functions f and g are piecewise continuous on $[0, \infty)$ and of exponential order. Then for constants $\alpha$ and $\beta$ we can write

$$\mathcal{L}^{-1}\{\alpha F(s) + \beta G(s)\} = \alpha\,\mathcal{L}^{-1}\{F(s)\} + \beta\,\mathcal{L}^{-1}\{G(s)\}, \qquad (1)$$

where $F(s) = \mathcal{L}\{f(t)\}$ and $G(s) = \mathcal{L}\{g(t)\}$.

Like (5) of Section 4.1, (1) extends to any finite linear combination of Laplace transforms.

EXAMPLE 2 Termwise Division and Linearity

Find $\mathcal{L}^{-1}\left\{\dfrac{-2s+6}{s^2+4}\right\}$.

SOLUTION

We first rewrite the given function of s as two expressions by means of termwise division and then use (1):

$$\mathcal{L}^{-1}\left\{\frac{-2s+6}{s^2+4}\right\} = \mathcal{L}^{-1}\left\{\frac{-2s}{s^2+4} + \frac{6}{s^2+4}\right\} = -2\,\mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} + \frac{6}{2}\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+4}\right\} = -2\cos 2t + 3\sin 2t. \qquad (2)$$
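As a quick cross-check of (2), the same termwise computation can be reproduced in SymPy (again, a sketch under the assumption that SymPy is available):

```python
# Verifying (2) of Example 2 (a SymPy sketch).
from sympy import symbols, inverse_laplace_transform, sin, cos, simplify

t = symbols('t', positive=True)
s = symbols('s')

F = (-2*s + 6)/(s**2 + 4)
f = inverse_laplace_transform(F, s, t)
print(simplify(f - (-2*cos(2*t) + 3*sin(2*t))))   # 0
```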

Partial Fractions

Partial fractions play an important role in finding inverse Laplace transforms. The decomposition of a rational expression into component fractions can be done quickly by means of a single command on most computer algebra systems. Indeed, some CASs have packages that implement Laplace transform and inverse Laplace transform commands. But for those of you without access to such software, we will review in this and subsequent sections of this chapter some of the basic algebra in the important cases in which the denominator of a Laplace transform F(s) contains (i) distinct linear factors, (ii) repeated linear factors, and (iii) quadratic polynomials with no real factors.

Although we shall examine each of these cases as this chapter develops, it still might be a good idea for you to consult either a calculus text or a current precalculus text for a more comprehensive review of this theory.

The following example illustrates partial fraction decomposition in the case when the denominator of F(s) is factorable into distinct linear factors.

EXAMPLE 3 Partial Fractions and Linearity

Partial fractions: distinct linear factors in denominator

Find $\mathcal{L}^{-1}\left\{\dfrac{s^2+6s+9}{(s-1)(s-2)(s+4)}\right\}$.

SOLUTION

There exist unique constants A, B, and C such that

$$\frac{s^2+6s+9}{(s-1)(s-2)(s+4)} = \frac{A}{s-1} + \frac{B}{s-2} + \frac{C}{s+4} = \frac{A(s-2)(s+4) + B(s-1)(s+4) + C(s-1)(s-2)}{(s-1)(s-2)(s+4)}.$$

Since the denominators are identical, the numerators are identical:

$$s^2 + 6s + 9 = A(s-2)(s+4) + B(s-1)(s+4) + C(s-1)(s-2). \qquad (3)$$

By comparing coefficients of powers of s on both sides of the equality, we know that (3) is equivalent to a system of three equations in the three unknowns A, B, and C. However, recall that there is a shortcut for determining these unknowns. If we set s = 1, s = 2, and s = −4 in (3) we obtain, respectively,*

$$16 = A(-1)(5), \qquad 25 = B(1)(6), \qquad 1 = C(-5)(-6),$$

and so $A = -\frac{16}{5}$, $B = \frac{25}{6}$, $C = \frac{1}{30}$. Hence the partial fraction decomposition is

$$\frac{s^2+6s+9}{(s-1)(s-2)(s+4)} = -\frac{16/5}{s-1} + \frac{25/6}{s-2} + \frac{1/30}{s+4}, \qquad (4)$$

and thus, from the linearity of $\mathcal{L}^{-1}$ and part (c) of Theorem 4.2.1,

$$\mathcal{L}^{-1}\left\{\frac{s^2+6s+9}{(s-1)(s-2)(s+4)}\right\} = -\frac{16}{5}\,\mathcal{L}^{-1}\left\{\frac{1}{s-1}\right\} + \frac{25}{6}\,\mathcal{L}^{-1}\left\{\frac{1}{s-2}\right\} + \frac{1}{30}\,\mathcal{L}^{-1}\left\{\frac{1}{s+4}\right\} = -\frac{16}{5}e^{t} + \frac{25}{6}e^{2t} + \frac{1}{30}e^{-4t}. \qquad (5)$$
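The decomposition (4) is exactly what a CAS partial-fraction command produces. A minimal SymPy sketch (our illustration, not part of the text) of the whole of Example 3:

```python
# Example 3 with SymPy: partial fractions, then the inverse transform (a sketch).
from sympy import symbols, apart, inverse_laplace_transform, exp, Rational, simplify

t = symbols('t', positive=True)
s = symbols('s')

F = (s**2 + 6*s + 9)/((s - 1)*(s - 2)*(s + 4))
print(apart(F, s))   # -16/(5*(s - 1)) + 25/(6*(s - 2)) + 1/(30*(s + 4)), up to ordering

f = inverse_laplace_transform(F, s, t)
expected = -Rational(16, 5)*exp(t) + Rational(25, 6)*exp(2*t) + Rational(1, 30)*exp(-4*t)
print(simplify(f - expected))   # 0
```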

4.2.2 Transforms of Derivatives

Transform of a Derivative

As pointed out in the introduction to this chapter, our immediate goal is to use the Laplace transform to solve differential equations. To that end we need to evaluate quantities such as $\mathcal{L}\{dy/dt\}$ and $\mathcal{L}\{d^2y/dt^2\}$.

To begin, we consider the case when the function f is continuous, f′ is piecewise continuous on $[0, \infty)$, and both functions are of exponential order with c as specified in Definition 4.1.2. For simplicity, we also assume that f′ has a single point of discontinuity $t_0$ on the interval $[0, \infty)$. Then integration by parts gives

$$\mathcal{L}\{f'(t)\} = \int_0^{\infty} e^{-st}f'(t)\,dt = \int_0^{t_0} e^{-st}f'(t)\,dt + \int_{t_0}^{\infty} e^{-st}f'(t)\,dt = e^{-st}f(t)\Big|_0^{\infty} + s\int_0^{\infty} e^{-st}f(t)\,dt,$$

where the two boundary terms at $t_0$ cancel because f is continuous there.

Because the function f is of exponential order, the term $e^{-st}f(t) \to 0$ as $t \to \infty$ for $s > c$, and the last term is recognized as $s\,\mathcal{L}\{f(t)\}$. Using the notation $\mathcal{L}\{f(t)\} = F(s)$, we have shown that

$$\mathcal{L}\{f'(t)\} = sF(s) - f(0). \qquad (6)$$

The result in (6) can be used recursively to find Laplace transforms of higher derivatives. For example, if $f(t)$ and $f'(t)$ in (6) are replaced by $f'(t)$ and $f''(t)$, respectively, we get

$$\mathcal{L}\{f''(t)\} = s\,\mathcal{L}\{f'(t)\} - f'(0) = s[sF(s) - f(0)] - f'(0),$$

or

$$\mathcal{L}\{f''(t)\} = s^2F(s) - sf(0) - f'(0). \qquad (7)$$

Similarly, $\mathcal{L}\{f'''(t)\} = s\,\mathcal{L}\{f''(t)\} - f''(0)$ yields

$$\mathcal{L}\{f'''(t)\} = s^3F(s) - s^2f(0) - sf'(0) - f''(0). \qquad (8)$$

The results in (6), (7), and (8) are special cases of the following theorem.

THEOREM 4.2.2 Transform of a Derivative

Suppose $f, f', \ldots, f^{(n-1)}$ are continuous on $[0, \infty)$, $f^{(n)}$ is piecewise continuous on $[0, \infty)$, and $f, f', \ldots, f^{(n-1)}, f^{(n)}$ are of exponential order with c as specified in Definition 4.1.2. Then for s > c,

$$\mathcal{L}\{f^{(n)}(t)\} = s^nF(s) - s^{n-1}f(0) - s^{n-2}f'(0) - \cdots - f^{(n-1)}(0),$$

where $F(s) = \mathcal{L}\{f(t)\}$.
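Theorem 4.2.2 can be spot-checked for a concrete function. The sketch below (again using SymPy, our choice of CAS) verifies (6) and (7) for f(t) = cos kt:

```python
# Spot-checking (6) and (7) for f(t) = cos(kt) (a SymPy sketch).
from sympy import symbols, cos, diff, laplace_transform, simplify

t, s, k = symbols('t s k', positive=True)

f = cos(k*t)
F = laplace_transform(f, t, s, noconds=True)        # s/(s**2 + k**2)

# (6):  L{f'} = s F(s) - f(0)
lhs1 = laplace_transform(diff(f, t), t, s, noconds=True)
print(simplify(lhs1 - (s*F - f.subs(t, 0))))         # 0

# (7):  L{f''} = s^2 F(s) - s f(0) - f'(0)
lhs2 = laplace_transform(diff(f, t, 2), t, s, noconds=True)
print(simplify(lhs2 - (s**2*F - s*f.subs(t, 0) - diff(f, t).subs(t, 0))))   # 0
```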

Solving Linear ODEs

It is apparent from the general result given in Theorem 4.2.2 that $\mathcal{L}\{d^ny/dt^n\}$ depends on $Y(s) = \mathcal{L}\{y(t)\}$ and the n − 1 derivatives of y(t) evaluated at t = 0. This property makes the Laplace transform ideally suited for solving linear initial-value problems in which the differential equation has constant coefficients. Such a differential equation is simply a linear combination of terms $y, y', y'', \ldots, y^{(n)}$:

$$a_n\frac{d^ny}{dt^n} + a_{n-1}\frac{d^{n-1}y}{dt^{n-1}} + \cdots + a_0y = g(t), \qquad y(0) = y_0,\ y'(0) = y_1,\ \ldots,\ y^{(n-1)}(0) = y_{n-1},$$

where the coefficients $a_i$, $i = 0, 1, \ldots, n$ and $y_0, y_1, \ldots, y_{n-1}$ are constants. By the linearity property, the Laplace transform of this linear combination is a linear combination of Laplace transforms:

$$a_n\,\mathcal{L}\left\{\frac{d^ny}{dt^n}\right\} + a_{n-1}\,\mathcal{L}\left\{\frac{d^{n-1}y}{dt^{n-1}}\right\} + \cdots + a_0\,\mathcal{L}\{y\} = \mathcal{L}\{g(t)\}. \qquad (9)$$

From Theorem 4.2.2, (9) becomes

$$a_n\left[s^nY(s) - s^{n-1}y(0) - \cdots - y^{(n-1)}(0)\right] + a_{n-1}\left[s^{n-1}Y(s) - s^{n-2}y(0) - \cdots - y^{(n-2)}(0)\right] + \cdots + a_0Y(s) = G(s), \qquad (10)$$

where $\mathcal{L}\{y(t)\} = Y(s)$ and $\mathcal{L}\{g(t)\} = G(s)$. In other words:

The Laplace transform of a linear differential equation with constant coefficients becomes an algebraic equation in Y(s).

If we solve the general transformed equation (10) for the symbol Y(s), we first obtain P(s)Y(s) = Q(s) + G(s), and then write

$$Y(s) = \frac{Q(s)}{P(s)} + \frac{G(s)}{P(s)}, \qquad (11)$$

where $P(s) = a_ns^n + a_{n-1}s^{n-1} + \cdots + a_0$, Q(s) is a polynomial in s of degree less than or equal to n − 1 consisting of the various products of the coefficients $a_i$, $i = 1, \ldots, n$, and the prescribed initial conditions $y_0, y_1, \ldots, y_{n-1}$, and G(s) is the Laplace transform of g(t).* Typically we put the two terms in (11) over the least common denominator and then decompose the expression into two or more partial fractions. Finally, the solution y(t) of the original initial-value problem is $y(t) = \mathcal{L}^{-1}\{Y(s)\}$, where the inverse transform is done term by term.

The procedure is summarized in FIGURE 4.2.1.

[Figure: flowchart of the four steps in solving an IVP by the Laplace transform. Step 1: Find the unknown y(t) that satisfies a DE and initial conditions; apply the Laplace transform $\mathcal{L}$. Step 2: The transformed DE becomes an algebraic equation in Y(s). Step 3: Solve the transformed equation for Y(s); apply the inverse transform $\mathcal{L}^{-1}$. Step 4: Obtain the solution y(t) of the original IVP.]

FIGURE 4.2.1 Steps in solving an IVP by the Laplace transform

The next example illustrates the foregoing method of solving DEs.

EXAMPLE 4 Solving a First-Order IVP

Use the Laplace transform to solve the initial-value problem

$$\frac{dy}{dt} + 3y = 13\sin 2t, \qquad y(0) = 6.$$

SOLUTION

We first take the transform of each member of the differential equation:

$$\mathcal{L}\left\{\frac{dy}{dt}\right\} + 3\,\mathcal{L}\{y\} = 13\,\mathcal{L}\{\sin 2t\}. \qquad (12)$$

But from (6), $\mathcal{L}\{dy/dt\} = sY(s) - y(0) = sY(s) - 6$, and from part (d) of Theorem 4.1.1, $\mathcal{L}\{\sin 2t\} = 2/(s^2 + 4)$, and so (12) is the same as

$$sY(s) - 6 + 3Y(s) = \frac{26}{s^2+4} \quad\text{or}\quad (s + 3)Y(s) = 6 + \frac{26}{s^2+4}.$$

Solving the last equation for Y(s), we get

$$Y(s) = \frac{6}{s+3} + \frac{26}{(s+3)(s^2+4)} = \frac{6s^2 + 50}{(s+3)(s^2+4)}. \qquad (13)$$

Partial fractions: quadratic polynomial with no real factors

Since the quadratic polynomial $s^2 + 4$ does not factor using real numbers, its assumed numerator in the partial fraction decomposition is a linear polynomial in s:

$$\frac{6s^2 + 50}{(s+3)(s^2+4)} = \frac{A}{s+3} + \frac{Bs + C}{s^2+4}.$$

Putting the right side of the equality over a common denominator and equating numerators gives $6s^2 + 50 = A(s^2 + 4) + (Bs + C)(s + 3)$. Setting s = −3 then yields immediately A = 8. Since the denominator has no more real zeros, we equate the coefficients of $s^2$ and s: 6 = A + B and 0 = 3B + C. Using the value of A in the first equation gives B = −2, and then using this last value in the second equation gives C = 6. Thus

$$Y(s) = \frac{6s^2 + 50}{(s+3)(s^2+4)} = \frac{8}{s+3} + \frac{-2s+6}{s^2+4}.$$

We are not quite finished because the last rational expression still has to be written as two fractions. But this was done by termwise division in Example 2. From (2) of that example,

$$y(t) = 8\,\mathcal{L}^{-1}\left\{\frac{1}{s+3}\right\} - 2\,\mathcal{L}^{-1}\left\{\frac{s}{s^2+4}\right\} + 3\,\mathcal{L}^{-1}\left\{\frac{2}{s^2+4}\right\}.$$

It follows from parts (c), (d), and (e) of Theorem 4.2.1 that the solution of the initial-value problem is $y(t) = 8e^{-3t} - 2\cos 2t + 3\sin 2t$.
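The four steps in Figure 4.2.1 translate almost line for line into a CAS session. Here is a sketch of Example 4 carried out in SymPy (our illustration; the symbol Y below simply stands in for the unknown transform Y(s)):

```python
# Example 4, step by step (a SymPy sketch).
from sympy import (symbols, Eq, solve, apart, laplace_transform,
                   inverse_laplace_transform, sin, cos, exp, simplify)

t = symbols('t', positive=True)
s, Y = symbols('s Y')                      # Y stands in for Y(s)

# Steps 1-2: transform dy/dt + 3y = 13 sin 2t with y(0) = 6, using (6)
rhs = laplace_transform(13*sin(2*t), t, s, noconds=True)   # 26/(s**2 + 4)
algebraic_eq = Eq((s*Y - 6) + 3*Y, rhs)

# Step 3: solve for Y(s) and decompose into partial fractions
Ys = apart(solve(algebraic_eq, Y)[0], s)
print(Ys)                    # 8/(s + 3) + (-2*s + 6)/(s**2 + 4), up to ordering

# Step 4: invert termwise
y = inverse_laplace_transform(Ys, s, t)
print(simplify(y - (8*exp(-3*t) - 2*cos(2*t) + 3*sin(2*t))))   # 0
```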

EXAMPLE 5 Solving a Second-Order IVP

Solve $y'' - 3y' + 2y = e^{-4t}$, y(0) = 1, y′(0) = 5.

SOLUTION

Proceeding as in Example 4, we transform the DE by taking the sum of the transforms of each term, use (6) and (7), use the given initial conditions, use part (c) of Theorem 4.1.1, and then solve for Y(s):

$$\mathcal{L}\{y''\} - 3\,\mathcal{L}\{y'\} + 2\,\mathcal{L}\{y\} = \mathcal{L}\{e^{-4t}\}$$

$$s^2Y(s) - sy(0) - y'(0) - 3\left[sY(s) - y(0)\right] + 2Y(s) = \frac{1}{s+4}$$

$$(s^2 - 3s + 2)Y(s) = s + 2 + \frac{1}{s+4}$$

$$Y(s) = \frac{s+2}{s^2 - 3s + 2} + \frac{1}{(s^2 - 3s + 2)(s+4)} = \frac{s^2 + 6s + 9}{(s-1)(s-2)(s+4)}, \qquad (14)$$

and so $y(t) = \mathcal{L}^{-1}\{Y(s)\}$. The details of the decomposition of Y(s) in (14) into partial fractions have already been carried out in Example 3. In view of (4) and (5) the solution of the initial-value problem is

$$y(t) = -\frac{16}{5}e^{t} + \frac{25}{6}e^{2t} + \frac{1}{30}e^{-4t}.$$
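For comparison, the same initial-value problem can be handed to SymPy's dsolve, which solves it by an independent method; the answer must agree with (4) and (5). This is a sketch, assuming SymPy is available:

```python
# Cross-checking Example 5 with dsolve (a SymPy sketch).
from sympy import symbols, Function, Eq, dsolve, exp, Rational, simplify

t = symbols('t')
y = Function('y')

ode = Eq(y(t).diff(t, 2) - 3*y(t).diff(t) + 2*y(t), exp(-4*t))
sol = dsolve(ode, y(t), ics={y(0): 1, y(t).diff(t).subs(t, 0): 5})

expected = -Rational(16, 5)*exp(t) + Rational(25, 6)*exp(2*t) + Rational(1, 30)*exp(-4*t)
print(simplify(sol.rhs - expected))   # 0
```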

Examples 4 and 5 illustrate the basic procedure for using the Laplace transform to solve a linear initial-value problem, but these examples may appear to demonstrate a method that is not much better than the approach to such problems outlined in Sections 2.3 and 3.3–3.6. Don’t draw any negative conclusions from the two examples. Yes, there is a lot of algebra inherent in the use of the Laplace transform, but observe that we do not have to use variation of parameters or worry about the cases and algebra in the method of undetermined coefficients. Moreover, since the method incorporates the prescribed initial conditions directly into the solution, there is no need for the separate operations of applying the initial conditions to the general solution $y = c_1y_1 + c_2y_2 + \cdots + c_ny_n + y_p$ of the DE to find specific constants in a particular solution of the IVP.

The Laplace transform has many operational properties. We will examine some of these properties and illustrate how they enable us to solve problems of greater complexity in the sections that follow.

We conclude this section with a little bit of additional theory related to the types of functions of s that we will generally be working with. The next theorem indicates that not every arbitrary function of s is a Laplace transform of a piecewise-continuous function of exponential order.

THEOREM 4.2.3 Behavior of F(s) as s → ∞

If a function f is piecewise continuous on $[0, \infty)$ and of exponential order with c as specified in Definition 4.1.2 and $\mathcal{L}\{f(t)\} = F(s)$, then $\displaystyle\lim_{s\to\infty} F(s) = 0$.

PROOF:

Since f(t) is piecewise continuous on the closed interval [0, T], it is necessarily bounded on the interval. That is, $|f(t)| \le M_1 = M_1e^{0t}$. Also, because f is assumed to be of exponential order, there exist constants $\gamma$, $M_2 > 0$, and T > 0, such that $|f(t)| \le M_2e^{\gamma t}$ for t > T. If M denotes the maximum of $\{M_1, M_2\}$ and c denotes the maximum of $\{0, \gamma\}$, then

$$|\mathcal{L}\{f(t)\}| \le \int_0^{\infty} e^{-st}\,|f(t)|\,dt \le M\int_0^{\infty} e^{-st}e^{ct}\,dt = -M\,\frac{e^{-(s-c)t}}{s-c}\Bigg|_0^{\infty} = \frac{M}{s-c}$$

for s > c. As $s \to \infty$, we have $|\mathcal{L}\{f(t)\}| \to 0$, and so $\mathcal{L}\{f(t)\} \to 0$.

As a consequence of Theorem 4.2.3 we can say that functions of s such as $F_1(s) = 1$ and $F_2(s) = s/(s + 1)$ are not the Laplace transforms of piecewise-continuous functions of exponential order, since $F_1(s) \not\to 0$ and $F_2(s) \not\to 0$ as $s \to \infty$. But you should not conclude from this that $F_1(s)$ and $F_2(s)$ are not Laplace transforms. There are other kinds of functions.
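Theorem 4.2.3 is easy to test symbolically. A quick SymPy sketch of the limits mentioned above:

```python
# Behavior as s -> infinity (a SymPy sketch of Theorem 4.2.3).
from sympy import symbols, limit, oo

s = symbols('s', positive=True)

print(limit(2/(s**2 + 4), s, oo))   # 0:  a genuine transform, L{sin 2t}, decays
print(limit(1/(s + 3), s, oo))      # 0:  L{e^{-3t}} decays as well
print(limit(s/(s + 1), s, oo))      # 1:  F2(s) = s/(s + 1) does not tend to 0
```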

REMARKS

(i) The inverse Laplace transform of a function F(s) may not be unique; in other words, it is possible that $\mathcal{L}\{f_1(t)\} = \mathcal{L}\{f_2(t)\}$ and yet $f_1 \ne f_2$. For our purposes this is not anything to be concerned about. If $f_1$ and $f_2$ are piecewise continuous on $[0, \infty)$ and of exponential order, then $f_1$ and $f_2$ are essentially the same. See Problem 56 in Exercises 4.2. However, if $f_1$ and $f_2$ are continuous on $[0, \infty)$ and $\mathcal{L}\{f_1(t)\} = \mathcal{L}\{f_2(t)\}$, then $f_1 = f_2$ on the interval.

(ii) This remark is for those of you who will be required to do partial fraction decompositions by hand. There is another way of determining the coefficients in a partial fraction decomposition in the special case when $\mathcal{L}\{f(t)\} = F(s)$ is a rational function of s and the denominator of F is a product of distinct linear factors. Let us illustrate by reexamining Example 3. Suppose we multiply both sides of the assumed decomposition

$$\frac{s^2+6s+9}{(s-1)(s-2)(s+4)} = \frac{A}{s-1} + \frac{B}{s-2} + \frac{C}{s+4} \qquad (15)$$

by, say, s − 1, simplify, and then set s = 1. Since the coefficients of B and C on the right side of the equality are zero, we get

$$\left.\frac{s^2+6s+9}{(s-2)(s+4)}\right|_{s=1} = A \quad\text{or}\quad A = -\frac{16}{5}.$$

Written another way,

$$\left.\frac{s^2+6s+9}{\cancel{(s-1)}\,(s-2)(s+4)}\right|_{s=1} = -\frac{16}{5} = A,$$

where we have colored or covered up the factor that canceled when the left side was multiplied by s − 1. Now to obtain B and C we simply evaluate the left-hand side of (15) while covering up, in turn, s − 2 and s + 4:

$$\left.\frac{s^2+6s+9}{(s-1)\,\cancel{(s-2)}\,(s+4)}\right|_{s=2} = \frac{25}{6} = B \quad\text{and}\quad \left.\frac{s^2+6s+9}{(s-1)(s-2)\,\cancel{(s+4)}}\right|_{s=-4} = \frac{1}{30} = C.$$

The desired decomposition (15) is given in (4). This special technique for determining coefficients is naturally known as the cover-up method.
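The cover-up rule amounts to evaluating the left side of (15) at a zero of the denominator while suppressing the corresponding factor. A tiny helper in SymPy makes this mechanical (the function name `coverup` is ours, purely for illustration):

```python
# The cover-up method of Remark (ii) as code (a sketch).
from sympy import symbols

s = symbols('s')

def coverup(numerator, zeros, r):
    """Evaluate numerator(s) / prod_{ri != r} (s - ri) at s = r (cover-up rule)."""
    val = numerator.subs(s, r)
    for ri in zeros:
        if ri != r:
            val /= (r - ri)
    return val

num = s**2 + 6*s + 9
zeros = [1, 2, -4]                       # zeros of (s - 1)(s - 2)(s + 4)
for r in zeros:
    print(r, coverup(num, zeros, r))     # A = -16/5, B = 25/6, C = 1/30
```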

(iii) In this remark we continue our introduction to the terminology of dynamical systems. Because of (9) and (10) the Laplace transform is well adapted to linear dynamical systems. In (11) the polynomial $P(s) = a_ns^n + a_{n-1}s^{n-1} + \cdots + a_0$ is the total coefficient of Y(s) in (10) and is simply the left-hand side of the DE with the derivatives $d^ky/dt^k$ replaced by powers $s^k$, $k = 0, 1, \ldots, n$. It is usual practice to call the reciprocal of P(s), namely, W(s) = 1/P(s), the transfer function of the system and write (11) as

$$Y(s) = W(s)Q(s) + W(s)G(s). \qquad (16)$$

In this manner we have separated, in an additive sense, the effects on the response that are due to the initial conditions (that is, W(s)Q(s)) and to the input function g (that is, W(s)G(s)). See (13) and (14). Hence the response y(t) of the system is a superposition of two responses:

$$y(t) = \mathcal{L}^{-1}\{W(s)Q(s)\} + \mathcal{L}^{-1}\{W(s)G(s)\} = y_0(t) + y_1(t).$$

If the input is g(t) = 0, then the solution of the problem is $y_0(t) = \mathcal{L}^{-1}\{W(s)Q(s)\}$. This solution is called the zero-input response of the system. On the other hand, the function $y_1(t) = \mathcal{L}^{-1}\{W(s)G(s)\}$ is the output due to the input g(t). Now if the initial state of the system is the zero state (all the initial conditions are zero), then Q(s) = 0, and so the only solution of the initial-value problem is $y_1(t)$. The latter solution is called the zero-state response of the system. Both $y_0(t)$ and $y_1(t)$ are particular solutions: $y_0(t)$ is a solution of the IVP consisting of the associated homogeneous equation with the given initial conditions, and $y_1(t)$ is a solution of the IVP consisting of the nonhomogeneous equation with zero initial conditions. In Example 5, we see from (14) that the transfer function is $W(s) = 1/(s^2 - 3s + 2)$, the zero-input response is

$$y_0(t) = \mathcal{L}^{-1}\left\{\frac{s+2}{(s-1)(s-2)}\right\} = -3e^{t} + 4e^{2t},$$

and the zero-state response is

$$y_1(t) = \mathcal{L}^{-1}\left\{\frac{1}{(s-1)(s-2)(s+4)}\right\} = -\frac{1}{5}e^{t} + \frac{1}{6}e^{2t} + \frac{1}{30}e^{-4t}.$$

Verify that the sum of $y_0(t)$ and $y_1(t)$ is the solution y(t) in that example and that $y_0(0) = 1$, $y_0'(0) = 5$, whereas $y_1(0) = 0$, $y_1'(0) = 0$.
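The split into zero-input and zero-state responses can be reproduced directly from W(s), Q(s), and G(s). A SymPy sketch for Example 5 (the names W, Q, and G simply mirror the notation above):

```python
# Zero-input and zero-state responses for Example 5 (a SymPy sketch).
from sympy import symbols, inverse_laplace_transform, exp, Rational, simplify

t = symbols('t', positive=True)
s = symbols('s')

W = 1/(s**2 - 3*s + 2)       # transfer function
Q = s + 2                    # from y(0) = 1, y'(0) = 5:  s*y(0) + y'(0) - 3*y(0)
G = 1/(s + 4)                # G(s) = L{e^{-4t}}

y0 = inverse_laplace_transform(W*Q, s, t)    # zero-input response
y1 = inverse_laplace_transform(W*G, s, t)    # zero-state response

y_total = -Rational(16, 5)*exp(t) + Rational(25, 6)*exp(2*t) + Rational(1, 30)*exp(-4*t)
print(simplify(y0 + y1 - y_total))           # 0: the two responses add up to y(t)
print(y0.subs(t, 0), y1.subs(t, 0))          # 1 0
```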

4.2 Exercises Answers to selected odd-numbered problems begin on page ANS-9.

4.2.1 Inverse Transforms

In Problems 1–30, use Theorem 4.2.1 to find the given inverse transform.

In Problems 31 and 32, find the given inverse Laplace transform by finding the Laplace transform of the indicated function f.

  31. ; f(t) = a sin bt − b sin at
  32. ; f(t) = cos bt − cos at

4.2.2 Transforms of Derivatives

In Problems 33–46, use the Laplace transform to solve the given initial-value problem.

  1. dy/dt − y = 1, y(0) = 0
  2. 2 dy/dt + y = 0, y(0) = −3
  3. y′ + 6y = e^{4t}, y(0) = 2
  4. y′ − y = 2 cos 5t, y(0) = 0
  5. y″ + 5y′ + 4y = 0, y(0) = 1, y′(0) = 0
  6. y″ − 4y′ = 6e^{3t} − 3e^{−t}, y(0) = 1, y′(0) = −1
  7. y″ + y = t, y(0) = 10, y′(0) = 0
  8. y″ + 9y = e^{t}, y(0) = 0, y′(0) = 0
  9. 2y‴ + 3y″ − 3y′ − 2y = e^{−t}, y(0) = 0, y′(0) = 0, y″(0) = 1
  10. y‴ + 2y″ − y′ − 2y = sin 3t, y(0) = 0, y′(0) = 0, y″(0) = 1

In Problems 47 and 48, use one of the inverse Laplace transforms found in Problems 31 and 32 to solve the given initial-value problem.

The inverse forms of the results in Problem 53 in Exercises 4.1 are

$$\mathcal{L}^{-1}\left\{\frac{s-a}{(s-a)^2+b^2}\right\} = e^{at}\cos bt$$

and

$$\mathcal{L}^{-1}\left\{\frac{b}{(s-a)^2+b^2}\right\} = e^{at}\sin bt.$$

In Problems 49 and 50, use the Laplace transform and these inverses to solve the given initial-value problem.

  49. y′ + y = e^{−3t} cos 2t, y(0) = 0
  50. y″ − 2y′ + 5y = 0, y(0) = 1, y′(0) = 3

In Problems 51 and 52, use the table of Laplace transforms in Appendix C to solve the given initial-value problem.

  51. y^{(4)} + 16y = 0, y(0) = 0, y′(0) = 0, y″(0) = 1, y‴(0) = 0
  52. y^{(4)} + 2y″ + y = 0, y(0) = 0, y′(0) = 1, y″(0) = 0, y‴(0) = 0

Discussion Problems

  53. Using a slight change in notation, the transform in (6) is the same as $\mathcal{L}\{f'(t)\} = s\,\mathcal{L}\{f(t)\} - f(0)$.

    With $f(t) = te^{at}$, discuss how this result in conjunction with (c) of Theorem 4.1.1 can be used to find $\mathcal{L}\{te^{at}\}$.

  54. Proceed as in Problem 53, but this time discuss how to use (7) with $f(t) = t\sin kt$ in conjunction with (d) and (e) of Theorem 4.1.1 to find $\mathcal{L}\{t\sin kt\}$.
  55. Suppose $f'$ is continuous for $t \ge 0$ and of exponential order. If $f$ is also of exponential order, then use (6) to show that $\lim_{s\to\infty} sF(s) = f(0)$, where $F(s) = \mathcal{L}\{f(t)\}$.
  56. Make up two functions $f_1$ and $f_2$ that have the same Laplace transform. Do not think profound thoughts.

 

*The numbers 1, 2, and −4 are the zeros of the common denominator (s − 1)(s − 2)(s + 4).

*The polynomial P(s) is the same as the nth degree auxiliary polynomial in (13) in Section 3.3, with the usual symbol m replaced by s.