6  Power Series

6.1 From Polynomials to Power Series

In the preceding chapter, we established that polynomials form a vector space. The space \mathcal{P}_n of polynomials of degree at most n has dimension n+1, with standard basis \{1, x, x^2, \ldots, x^n\}. Every polynomial in \mathcal{P}_n is a finite linear combination: p(x) = c_0 \cdot 1 + c_1 \cdot x + c_2 \cdot x^2 + \cdots + c_n \cdot x^n = \sum_{k=0}^{n} c_k x^k.

The space \mathcal{P} of all polynomials is infinite-dimensional, but each element uses only finitely many basis functions. The coordinate vector (c_0, c_1, c_2, \ldots) has only finitely many nonzero entries.

Power series remove this restriction. A power series f(x) = \sum_{n=0}^{\infty} c_n x^n = c_0 + c_1 x + c_2 x^2 + c_3 x^3 + \cdots allows infinitely many nonzero coefficients. This is an infinite linear combination in the basis \{1, x, x^2, \ldots\}.

The central question: when does such an infinite sum define a function? For each fixed x, we obtain a numerical series \sum c_n x^n. Whether this converges depends on both the coefficients c_n and the value of x. The theory of series convergence, developed in previous sections, determines the answer.
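To make this concrete, here is a minimal numerical sketch (plain Python; the helper name partial_sum is ours, not part of any library) showing how the same coefficient sequence can give a convergent or divergent numerical series depending on x.

```python
# Minimal sketch: for fixed x, a power series is just a numerical series.
# Here c_n = 1 for all n (the geometric series, revisited in Example 6.3).

def partial_sum(x, N):
    """Partial sum sum_{n=0}^{N} x^n of the geometric series."""
    return sum(x**n for n in range(N + 1))

for x in (0.5, 1.2):
    print(f"x = {x}:", [round(partial_sum(x, N), 4) for N in (5, 10, 20, 40)])
# At x = 0.5 the partial sums settle near 1/(1 - 0.5) = 2;
# at x = 1.2 they grow without bound.
```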


6.2 Power Series as Functions

Definition 6.1 (Power Series) A power series centered at a is a formal infinite linear combination \sum_{n=0}^{\infty} c_n (x-a)^n = c_0 + c_1(x-a) + c_2(x-a)^2 + c_3(x-a)^3 + \cdots, where c_n \in \mathbb{R} are coefficients and x is a variable.

For each fixed x, this produces a numerical series. The set of all x where the series converges is the domain of convergence.

Contrast with polynomials:

  • Polynomial: Finite sum, converges everywhere, always defines a function

  • Power series: Infinite sum, converges on a subset, defines a function only where convergent

Geometric interpretation: Just as a polynomial p(x) = \sum_{k=0}^{n} c_k x^k represents a vector in the (n+1)-dimensional space \mathcal{P}_n, a power series f(x) = \sum_{n=0}^{\infty} c_n x^n represents a vector in an infinite-dimensional function space. The coordinates (c_0, c_1, c_2, \ldots) form an infinite sequence, and convergence determines whether this infinite linear combination produces a well-defined function.

Convention. We focus on series centered at a = 0. The general case \sum c_n (x-a)^n reduces to this one via the substitution u = x - a.


6.3 Three Fundamental Examples

Before developing the general theory, we examine three power series with distinct convergence behavior.

Example 6.1 (Convergence Only at Origin) Consider \sum_{n=0}^{\infty} n! x^n. For any x \neq 0, apply the ratio test: \frac{(n+1)!|x|^{n+1}}{n!|x|^n} = (n+1)|x| \to \infty. The series diverges for all x \neq 0, converging only at the origin.

Example 6.2 (Convergence Everywhere) Consider \sum_{n=0}^{\infty} \frac{x^n}{n!}. For any fixed x, apply the ratio test: \frac{|x|^{n+1}/(n+1)!}{|x|^n/n!} = \frac{|x|}{n+1} \to 0 < 1. The series converges absolutely for all x \in \mathbb{R}. (This represents e^x.)

Example 6.3 (Convergence on an Interval) The geometric series \sum_{n=0}^{\infty} x^n converges for |x| < 1 and diverges for |x| \geq 1. From Section 2.6, when |x| < 1: \sum_{n=0}^{\infty} x^n = \frac{1}{1-x}.
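The contrast among the three examples is easy to see numerically. The following sketch (plain Python; the helper partial_sums is illustrative, not from any library) tabulates partial sums of each series at x = 0.5.

```python
from math import factorial

def partial_sums(coeff, x, Ns):
    """Partial sums of sum coeff(n) * x^n, one value for each N in Ns."""
    return [sum(coeff(n) * x**n for n in range(N + 1)) for N in Ns]

x, Ns = 0.5, (5, 10, 20)
print("sum n! x^n:  ", partial_sums(lambda n: factorial(n), x, Ns))      # explodes
print("sum x^n/n!:  ", partial_sums(lambda n: 1 / factorial(n), x, Ns))  # -> e^0.5 ~ 1.6487
print("sum x^n:     ", partial_sums(lambda n: 1.0, x, Ns))               # -> 1/(1-0.5) = 2
```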

These examples suggest that convergence is determined by distance from the center. The next section makes this precise.


6.4 Radius of Convergence

The theory rests on one fundamental observation: convergence at a single point x_0 \neq 0 forces absolute convergence at every point strictly closer to the center.

Lemma 6.1 (Comparison Lemma) If \sum c_n x_0^n converges for some x_0 \neq 0, then \sum c_n x^n converges absolutely for all x with |x| < |x_0|.

Proof. Convergence of \sum c_n x_0^n implies c_n x_0^n \to 0, hence the sequence \{c_n x_0^n\} is bounded: |c_n x_0^n| \leq M for some M > 0.

For |x| < |x_0|, define r = |x|/|x_0| < 1. Then |c_n x^n| = |c_n x_0^n| \cdot \left|\frac{x}{x_0}\right|^n \leq M r^n. Since \sum M r^n is geometric with ratio r < 1, it converges. By comparison, \sum |c_n x^n| converges. \square

Contrapositive. If \sum c_n x_0^n diverges, then \sum c_n x^n diverges for all |x| > |x_0|.

These results establish that convergence is determined by distance from the center.
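A quick numerical illustration of the lemma's bound (a sketch under our own choices: c_n = 1/n and x_0 = -1, where \sum (-1)^n/n converges, so |c_n x_0^n| = 1/n \leq 1 =: M):

```python
# Comparison lemma in action: for |x| < |x0| the terms |c_n x^n|
# are dominated by the geometric majorant M * r^n with r = |x/x0|.

x0, M = -1.0, 1.0   # sum (-1)^n / n converges; |c_n x0^n| = 1/n <= 1 = M
x = 0.8
r = abs(x / x0)
for n in (1, 5, 10, 20):
    term = abs(x**n / n)     # |c_n x^n| with c_n = 1/n
    bound = M * r**n         # geometric majorant
    print(n, round(term, 6), "<=", round(bound, 6), term <= bound)
```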

Theorem 6.1 (Existence of Radius of Convergence) For any power series \sum_{n=0}^{\infty} c_n x^n, exactly one of the following holds:

  1. The series converges only for x = 0 (set R = 0)

  2. The series converges for all x \in \mathbb{R} (set R = \infty)

  3. There exists R \in (0, \infty) such that the series converges absolutely for |x| < R and diverges for |x| > R

The value R is called the radius of convergence.

Proof. Define S = \{|x| : \sum c_n x^n \text{ converges}\}. This set is nonempty since 0 \in S.

Case 1: If S = \{0\}, we are in case (1).

Case 2: If S is unbounded, then for any M > 0, there exists x_0 with |x_0| > M and \sum c_n x_0^n convergent. By the comparison lemma, \sum c_n x^n converges absolutely for all |x| < |x_0|, hence for all |x| < M. Since M was arbitrary, we are in case (2).

Case 3: Otherwise, S is bounded and contains a point other than 0. By the completeness of \mathbb{R}, the supremum R = \sup S exists, and 0 < R < \infty.

If |x| < R, then |x| is not an upper bound for S, so there exists x_0 with |x| < |x_0| and \sum c_n x_0^n convergent. By the comparison lemma, \sum c_n x^n converges absolutely.

If |x| > R, suppose for contradiction that \sum c_n x^n converges. Then |x| \in S, contradicting that R is an upper bound. Thus the series diverges. \square

The radius completely determines convergence except at the boundary points x = \pm R, where separate testing is required.


6.5 Computing the Radius

Two formulas compute R directly from coefficients, derived by applying ratio and root tests to the power series.

Theorem 6.2 (Ratio Formula) If \lim_{n \to \infty} \left|\frac{c_{n+1}}{c_n}\right| = L exists (possibly infinite), then R = \begin{cases} 1/L & \text{if } 0 < L < \infty \\ \infty & \text{if } L = 0 \\ 0 & \text{if } L = \infty \end{cases}

Proof. Apply the ratio test to \sum c_n x^n with x fixed: \lim_{n \to \infty} \frac{|c_{n+1} x^{n+1}|}{|c_n x^n|} = |x| \lim_{n \to \infty} \left|\frac{c_{n+1}}{c_n}\right| = L|x|. For 0 < L < \infty, the ratio test gives absolute convergence when L|x| < 1 and divergence when L|x| > 1, i.e., convergence precisely for |x| < 1/L. Thus R = 1/L.

When L = 0, the inequality holds for all x, giving R = \infty. When L = \infty, it holds only for x = 0, giving R = 0. \square

Theorem 6.3 (Root Formula) If \lim_{n \to \infty} \sqrt[n]{|c_n|} = L exists (possibly infinite), then R = \begin{cases} 1/L & \text{if } 0 < L < \infty \\ \infty & \text{if } L = 0 \\ 0 & \text{if } L = \infty \end{cases}

The proof follows the same pattern using the root test.

Computation Examples

Example 6.4 (Computing Radius for \sum \frac{x^n}{n}) For \sum_{n=1}^{\infty} \frac{x^n}{n}: \lim_{n \to \infty} \frac{n}{n+1} = 1, so R = 1.

Example 6.5 (Computing Radius for \sum \frac{x^n}{n!}) For \sum_{n=0}^{\infty} \frac{x^n}{n!}: \lim_{n \to \infty} \frac{n!}{(n+1)!} = 0, so R = \infty.

Example 6.6 (Computing Radius for \sum n! x^n) For \sum_{n=0}^{\infty} n! x^n: \lim_{n \to \infty} \frac{(n+1)!}{n!} = \infty, so R = 0.
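These limits can also be approximated numerically. A hedged sketch (plain Python; radius_estimate is our own illustrative helper) computes the finite ratios |c_n / c_{n+1}|, which approach R by Theorem 6.2 when the limit exists.

```python
from math import factorial

def radius_estimate(coeff, n):
    """Finite ratio |c_n / c_{n+1}|, an approximation to R for large n."""
    return abs(coeff(n) / coeff(n + 1))

cases = [("x^n/n   (Ex. 6.4, R = 1)",   lambda n: 1 / n),
         ("x^n/n!  (Ex. 6.5, R = inf)", lambda n: 1 / factorial(n)),
         ("n! x^n  (Ex. 6.6, R = 0)",   lambda n: factorial(n))]
for name, coeff in cases:
    print(name, [round(radius_estimate(coeff, n), 4) for n in (10, 20, 50)])
# First row -> 1; second row grows without bound; third row shrinks to 0.
```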


6.6 The Interval of Convergence

The radius R determines convergence on the open interval (-R, R). At endpoints x = \pm R (when R < \infty), the series must be tested separately.

Definition 6.2 (Interval of Convergence) The interval of convergence is the set of all x where \sum c_n x^n converges. It has one of the forms: \{0\}, \quad (-R, R), \quad [-R, R), \quad (-R, R], \quad [-R, R], \quad \mathbb{R}

Procedure:

  1. Compute R using ratio or root formula

  2. Test convergence at x = R (if R < \infty)

  3. Test convergence at x = -R (if R < \infty)

  4. State the interval, including/excluding endpoints as appropriate

Example 6.7 (Interval of Convergence) Consider \sum_{n=1}^{\infty} \frac{x^n}{n}. We found R = 1.

At x = 1: the series becomes \sum \frac{1}{n}, the harmonic series, which diverges.

At x = -1: the series becomes \sum \frac{(-1)^n}{n}, which converges by the alternating series test.

Interval of convergence: [-1, 1).
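The two endpoint behaviors are visible numerically. A minimal sketch (plain Python; partial_sum is an illustrative helper):

```python
from math import log

# Endpoint behavior for sum x^n / n (R = 1):
# at x = 1 the partial sums are harmonic numbers H_N ~ ln N (divergence);
# at x = -1 they approach -ln 2 ~ -0.6931 (alternating series test).

def partial_sum(x, N):
    return sum(x**n / n for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, round(partial_sum(1.0, N), 4), round(partial_sum(-1.0, N), 4))
print("-ln 2 =", round(-log(2), 4))
```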


6.7 Vector Space of Analytic Functions

A power series with R > 0 defines a function on its interval of convergence: f(x) = \sum_{n=0}^{\infty} c_n x^n.

The set of all such functions forms a vector space under pointwise operations.

Definition 6.3 (Analytic Function) A function f is analytic at a if there exists R > 0 and coefficients c_n such that f(x) = \sum_{n=0}^{\infty} c_n (x-a)^n for all x with |x-a| < R.

The space of functions analytic at a is denoted \mathcal{A}_a.

Vector space structure:

  1. Addition: If f(x) = \sum a_n x^n with radius R_1 and g(x) = \sum b_n x^n with radius R_2, then (f+g)(x) = \sum (a_n + b_n) x^n with radius at least \min(R_1, R_2).

  2. Scalar multiplication: (\lambda f)(x) = \sum (\lambda a_n) x^n has the same radius as f.

  3. Zero element: The constant function 0(x) = 0 corresponds to the series with all coefficients zero.
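A minimal sketch of these operations (plain Python; the helpers add and scale are ours), representing a power series by a truncated list of coefficients:

```python
from itertools import zip_longest

def add(f, g):
    """Coefficientwise sum, padding the shorter list with zeros."""
    return [a + b for a, b in zip_longest(f, g, fillvalue=0)]

def scale(lam, f):
    """Coefficientwise scalar multiple."""
    return [lam * c for c in f]

geom = [1, 1, 1, 1]          # 1 + x + x^2 + x^3 + ...  (truncated)
expo = [1, 1, 1/2, 1/6]      # 1 + x + x^2/2! + x^3/3!  (truncated)
print(add(geom, expo))       # [2, 2, 1.5, 1.1666...]
print(scale(3.0, geom))      # [3.0, 3.0, 3.0, 3.0]
```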

The space \mathcal{A}_0 of functions analytic at 0 strictly contains the polynomial space \mathcal{P}. Functions like e^x, \sin(x), and \frac{1}{1-x} have power series representations but are not polynomials.


6.8 Differentiation

Power series inherit the smoothness of polynomials. Just as differentiating \sum_{k=0}^{n} c_k x^k gives \sum_{k=1}^{n} kc_k x^{k-1}, we can differentiate infinite sums term-by-term.

Theorem 6.4 (Term-by-Term Differentiation) Let f(x) = \sum_{n=0}^{\infty} c_n x^n with radius R > 0. Then f is differentiable on (-R, R), and f'(x) = \sum_{n=1}^{\infty} n c_n x^{n-1}. The differentiated series has the same radius R.

Proof sketch. Assume for simplicity that L = \lim_{n \to \infty} |c_{n+1}/c_n| exists, and apply the ratio test to \sum n c_n x^{n-1}: \lim_{n \to \infty} \frac{(n+1)|c_{n+1}|}{n|c_n|} = \lim_{n \to \infty} \frac{n+1}{n} \cdot \frac{|c_{n+1}|}{|c_n|} = 1 \cdot L = L. The radii match. Showing that the differentiated series actually equals f' requires uniform convergence, which the full proof supplies.

Consequence. Power series are C^\infty (infinitely differentiable): f^{(k)}(x) = \sum_{n=k}^{\infty} n(n-1)\cdots(n-k+1) c_n x^{n-k}.
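A sketch checking Theorem 6.4 on f(x) = e^x = \sum x^n/n! (plain Python; eval_series is an illustrative helper). The term-by-term derivative has coefficient (m+1)c_{m+1} = 1/m! in degree m, i.e., the same series, consistent with (e^x)' = e^x.

```python
from math import exp, factorial

def eval_series(coeffs, x):
    """Evaluate sum coeffs[n] * x^n for a truncated coefficient list."""
    return sum(c * x**n for n, c in enumerate(coeffs))

N = 20
c = [1 / factorial(n) for n in range(N + 1)]   # truncated series for e^x
dc = [n * c[n] for n in range(1, N + 1)]       # term-by-term derivative

x = 0.7
print(eval_series(dc, x), exp(x))   # both ~ 2.013752
```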


6.9 Uniqueness

If a function equals a power series, the coefficients are uniquely determined.

Theorem 6.5 (Uniqueness of Coefficients) If f(x) = \sum_{n=0}^{\infty} c_n x^n for all x in some interval (-r, r) with r > 0, then c_n = \frac{f^{(n)}(0)}{n!}.

Proof. Differentiate k times and evaluate at x = 0: f^{(k)}(x) = \sum_{n=k}^{\infty} n(n-1)\cdots(n-k+1) c_n x^{n-k}. At x = 0, all terms vanish except the n = k term: f^{(k)}(0) = k! c_k. \quad \square

The coefficients encode the derivatives at the center. In the normalized basis \{1, x, \frac{x^2}{2!}, \frac{x^3}{3!}, \ldots\}, the coordinates of f are precisely (f(0), f'(0), f''(0), \ldots).

This connects to the polynomial framework: just as polynomials have coordinates in \{1, x, x^2, \ldots\}, analytic functions have infinite coordinate vectors determined by their derivatives.
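The coefficient formula c_n = \frac{f^{(n)}(0)}{n!} can be checked symbolically. A sketch using sympy (assumed available; not part of this text's toolkit) recovers the geometric series coefficients of \frac{1}{1-x}:

```python
import sympy as sp

# Theorem 6.5 in action: c_n = f^{(n)}(0) / n!. For f(x) = 1/(1 - x)
# every coefficient should be 1, matching Example 6.3.

x = sp.symbols('x')
f = 1 / (1 - x)
coeffs = [sp.diff(f, x, n).subs(x, 0) / sp.factorial(n) for n in range(6)]
print(coeffs)   # [1, 1, 1, 1, 1, 1]
```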


6.10 Summary

Power series extend polynomials from finite to infinite linear combinations. The theory developed here establishes:

  1. Convergence domains: Every power series has a radius R \in [0, \infty] determining where it converges

  2. Vector space structure: Analytic functions form a vector space under pointwise operations

  3. Calculus operations: Power series can be differentiated term-by-term within their radius of convergence (term-by-term integration is developed in the integration chapters)

  4. Uniqueness: Coefficients are determined by derivatives at the center

The next chapter develops Taylor series, showing how to construct power series representations for C^\infty functions by extracting their derivatives at a point.

Exercises

  1. Computing radius and interval of convergence.

    1. For the power series \sum_{n=1}^{\infty} \frac{x^n}{n^2}, use Theorem 6.2 to compute the radius of convergence R.

    2. Test convergence at the endpoints x = R and x = -R by examining the resulting numerical series at each point.

    3. State the interval of convergence, indicating whether each endpoint is included or excluded. Justify your answer using the endpoint tests from part (b).

  2. Differentiation of power series.

    1. The geometric series f(x) = \sum_{n=0}^{\infty} x^n converges to \frac{1}{1-x} for |x| < 1. Use Theorem 6.4 to find f'(x) by differentiating the series term-by-term.

    2. Compute the derivative of \frac{1}{1-x} directly using the quotient rule from Calculus I.

    3. Verify that your answers from parts (a) and (b) agree, confirming that term-by-term differentiation produces the correct result for |x| < 1.

    4. Use your result from part (a) to find a power series representation for \frac{1}{(1-x)^2} by identifying the coefficients in the series \sum_{n=0}^{\infty} c_n x^n.

  3. In this problem, you will prove that power series representation is unique.

    1. Let f(x) = \sum_{n=0}^{\infty} c_n x^n converge on some interval (-r, r) with r > 0. Evaluate f(0) by substituting x = 0 into the power series. What does this tell you about c_0?

    2. By Theorem 6.4, we can differentiate term-by-term to obtain f'(x) = \sum_{n=1}^{\infty} n c_n x^{n-1} = c_1 + 2c_2 x + 3c_3 x^2 + \cdots. Evaluate f'(0) and determine the value of c_1.

    3. Differentiate f twice to obtain f''(x), then evaluate at x = 0 to find c_2 in terms of f''(0).

    4. Generalize the pattern from parts (a)–(c) to prove that c_n = \frac{f^{(n)}(0)}{n!} for all n \geq 0. Explain why this shows that the power series representation of f is unique—that is, if f(x) = \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} b_n x^n on some interval, then a_n = b_n for all n.

Solutions

  1. Radius and interval of convergence for \sum_{n=1}^{\infty} \frac{x^n}{n^2}.

    1. Computing the radius. We apply Theorem 6.2 with c_n = \frac{1}{n^2}. Compute \lim_{n \to \infty} \left|\frac{c_{n+1}}{c_n}\right| = \lim_{n \to \infty} \frac{1/(n+1)^2}{1/n^2} = \lim_{n \to \infty} \frac{n^2}{(n+1)^2} = \lim_{n \to \infty} \left(\frac{n}{n+1}\right)^2.

      Since \frac{n}{n+1} = \frac{1}{1 + 1/n} \to 1 as n \to \infty, we have \left(\frac{n}{n+1}\right)^2 \to 1^2 = 1.

      By Theorem 6.2 with L = 1, the radius of convergence is R = \frac{1}{L} = \frac{1}{1} = 1. \quad \square

    2. Testing the endpoints. We test x = 1 and x = -1 separately.

      At x = 1: The series becomes \sum_{n=1}^{\infty} \frac{1^n}{n^2} = \sum_{n=1}^{\infty} \frac{1}{n^2}.

      This is a p-series with p = 2 > 1, which converges by the p-series test (equivalently, by comparison with the integral \int_1^{\infty} x^{-2}\, dx).

      At x = -1: The series becomes \sum_{n=1}^{\infty} \frac{(-1)^n}{n^2}.

      Absolute convergence follows immediately: \left|\frac{(-1)^n}{n^2}\right| = \frac{1}{n^2}, and \sum \frac{1}{n^2} converges from above. Hence the series converges (absolutely).

      Both endpoints yield convergent series. \square

    3. Interval of convergence. From part (a), the series converges absolutely for |x| < 1, which means x \in (-1, 1).

      From part (b), the series converges at both x = 1 and x = -1. Therefore, the interval of convergence is [-1, 1].

      Both endpoints are included because the series converges at each. \square
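As a numerical cross-check (a sketch only; the classical limiting values \pi^2/6 and -\pi^2/12 are quoted for reference and are not needed for the convergence argument above):

```python
from math import pi

def partial_sum(x, N):
    return sum(x**n / n**2 for n in range(1, N + 1))

for N in (10, 100, 1000):
    print(N, round(partial_sum(1.0, N), 4), round(partial_sum(-1.0, N), 4))
print(" pi^2/6  =", round(pi**2 / 6, 4))    # limit at x = 1
print("-pi^2/12 =", round(-pi**2 / 12, 4))  # limit at x = -1
```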

  2. Differentiation of power series and the geometric series.

    1. Term-by-term differentiation. We have f(x) = \sum_{n=0}^{\infty} x^n. By Theorem 6.4, we may differentiate term-by-term: f'(x) = \sum_{n=1}^{\infty} n x^{n-1}.

      Reindexing by letting m = n-1 (so n = m+1 and the sum starts at m = 0): f'(x) = \sum_{m=0}^{\infty} (m+1) x^m = \sum_{n=0}^{\infty} (n+1) x^n.

      Alternatively, we can write this directly as f'(x) = 1 + 2x + 3x^2 + 4x^3 + \cdots = \sum_{n=0}^{\infty} (n+1)x^n. \quad \square

    2. Direct computation using calculus. We know from Example 6.3 that f(x) = \frac{1}{1-x} for |x| < 1. Using the quotient rule (or the chain rule applied to (1-x)^{-1}): f'(x) = \frac{d}{dx}\left[(1-x)^{-1}\right] = -(1-x)^{-2} \cdot (-1) = \frac{1}{(1-x)^2}. \quad \square

    3. Verification of agreement. From part (a), we found f'(x) = \sum_{n=0}^{\infty} (n+1)x^n.

      From part (b), we found f'(x) = \frac{1}{(1-x)^2}.

      These must be equal for |x| < 1. This is exactly what Theorem 6.4 guarantees: the term-by-term derivative of a power series converges to the derivative of its sum on the open interval of convergence. Since the sum of the geometric series is \frac{1}{1-x}, its term-by-term derivative \sum_{n=0}^{\infty} (n+1)x^n must converge to \frac{1}{(1-x)^2} for |x| < 1.

      Alternatively, we can verify numerically: at x = 1/2, f'(1/2) = \frac{1}{(1-1/2)^2} = \frac{1}{(1/2)^2} = 4, and \sum_{n=0}^{\infty} (n+1)\left(\frac{1}{2}\right)^n = 1 + 2 \cdot \frac{1}{2} + 3 \cdot \frac{1}{4} + 4 \cdot \frac{1}{8} + \cdots = 4 as well (see the sketch just below). The agreement confirms that term-by-term differentiation produces the correct derivative. \square
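A minimal sketch of that numerical check (plain Python; values rounded):

```python
# Partial sums of sum (n+1) x^n at x = 1/2 approach 1/(1 - 1/2)^2 = 4.

x = 0.5
for N in (10, 20, 40):
    print(N, round(sum((n + 1) * x**n for n in range(N + 1)), 6))
# -> 3.987305, 3.999978, then 4.0 to the shown precision.
```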

    4. Power series for \frac{1}{(1-x)^2}. From part (b), we established that \frac{1}{(1-x)^2} = f'(x) = \sum_{n=0}^{\infty} (n+1)x^n.

      Therefore, the power series representation is \frac{1}{(1-x)^2} = \sum_{n=0}^{\infty} (n+1)x^n = 1 + 2x + 3x^2 + 4x^3 + \cdots, with coefficients c_n = n+1 for n \geq 0. This series converges for |x| < 1, the same radius as the original geometric series. \square

  3. Proving uniqueness of power series representation.

    1. Finding c_0. Substituting x = 0 into the power series (with the convention 0^0 = 1 for the n = 0 term): f(0) = \sum_{n=0}^{\infty} c_n \cdot 0^n = c_0 + c_1 \cdot 0 + c_2 \cdot 0^2 + \cdots = c_0.

      All terms vanish except the first. Therefore, c_0 = f(0). \quad \square

    2. Finding c_1. By Theorem 6.4, differentiating term-by-term gives f'(x) = \sum_{n=1}^{\infty} n c_n x^{n-1} = c_1 + 2c_2 x + 3c_3 x^2 + \cdots

      Evaluating at x = 0: f'(0) = c_1 + 2c_2 \cdot 0 + 3c_3 \cdot 0^2 + \cdots = c_1.

      Therefore, c_1 = f'(0). \quad \square

    3. Finding c_2. Differentiating f'(x) term-by-term: f''(x) = \sum_{n=2}^{\infty} n(n-1) c_n x^{n-2} = 2 \cdot 1 \cdot c_2 + 3 \cdot 2 \cdot c_3 x + 4 \cdot 3 \cdot c_4 x^2 + \cdots

      Evaluating at x = 0: f''(0) = 2! \, c_2.

      Solving for c_2: c_2 = \frac{f''(0)}{2!}. \quad \square

    4. General formula and uniqueness. Following the pattern from parts (a)–(c), differentiating f a total of n times yields f^{(n)}(x) = \sum_{k=n}^{\infty} k(k-1)(k-2)\cdots(k-n+1) c_k x^{k-n}.

      At x = 0, all terms vanish except k = n: f^{(n)}(0) = n(n-1)(n-2)\cdots 2 \cdot 1 \cdot c_n = n! \, c_n.

      Solving for c_n: c_n = \frac{f^{(n)}(0)}{n!}.

      This proves Theorem 6.5: the coefficients of a power series are uniquely determined by the function’s derivatives at x = 0.

      Uniqueness. Suppose f(x) = \sum_{n=0}^{\infty} a_n x^n = \sum_{n=0}^{\infty} b_n x^n on some interval (-r, r) with r > 0. By the formula above, both representations must satisfy a_n = \frac{f^{(n)}(0)}{n!} = b_n for all n \geq 0. Since the coefficients are identical, the two power series are the same. This shows that if a function has a power series representation, that representation is unique. \square