
**Taylor's theorem states that, for any reasonable function f(x), the value of f at a point (x + [tex]\delta[/tex]) can be expressed as an infinite series involving f and its derivatives at the point x:**

f(x+[tex]\delta[/tex]) = f(x) + f '(x)[tex]\delta[/tex] + [tex]\frac{1}{2!}[/tex]f ''(x)[tex]\delta[/tex][tex]^{2}[/tex] + ....

where the primes denote successive derivatives of f(x). (Depending on the function, this series may converge for any increment [tex]\delta[/tex], or only for values of [tex]\delta[/tex] less than some nonzero 'radius of convergence'.) This theorem is enormously useful, especially for small values of [tex]\delta[/tex], when the first one or two terms of the series are often an excellent approximation. Find the Taylor series for ln(1 + [tex]\delta[/tex]). Do the same for cos [tex]\delta[/tex].
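As a quick numerical sanity check (not part of the original problem; the function names `ln_series` and `cos_series` are my own), the standard expansions ln(1 + [tex]\delta[/tex]) = [tex]\delta[/tex] - [tex]\delta[/tex]²/2 + [tex]\delta[/tex]³/3 - ... and cos [tex]\delta[/tex] = 1 - [tex]\delta[/tex]²/2! + [tex]\delta[/tex]⁴/4! - ... can be compared against Python's math library for a small increment:

```python
import math

def ln_series(d, n_terms=5):
    # Partial sum of ln(1 + d) = d - d^2/2 + d^3/3 - ...  (converges for |d| < 1)
    return sum((-1) ** (k + 1) * d ** k / k for k in range(1, n_terms + 1))

def cos_series(d, n_terms=5):
    # Partial sum of cos(d) = 1 - d^2/2! + d^4/4! - ...  (converges for all d)
    return sum((-1) ** k * d ** (2 * k) / math.factorial(2 * k) for k in range(n_terms))

d = 0.1
print(ln_series(d), math.log(1 + d))  # partial sum vs. exact value
print(cos_series(d), math.cos(d))
```

For d = 0.1 the five-term partial sums already agree with the exact values to six or more decimal places, which illustrates why the first few terms are such a good approximation for small [tex]\delta[/tex].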

We just started Taylor's theorem, and it seems like there are many related results. Why exactly are they useful, and what purpose do they serve?

Is finding the Taylor series for ln(1 + [tex]\delta[/tex]) just a matter of substituting into the equation above, or is there more to it?
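For concreteness, here is how the substitution would work out for the logarithm (a sketch, assuming the expansion is taken about x = 1 so that f(x) = ln x and f(x + [tex]\delta[/tex]) = ln(1 + [tex]\delta[/tex])): the derivatives are f '(x) = 1/x, f ''(x) = -1/x[tex]^{2}[/tex], f '''(x) = 2/x[tex]^{3}[/tex], so at x = 1 the theorem gives

ln(1 + [tex]\delta[/tex]) = 0 + [tex]\delta[/tex] - [tex]\frac{1}{2}[/tex][tex]\delta[/tex][tex]^{2}[/tex] + [tex]\frac{1}{3}[/tex][tex]\delta[/tex][tex]^{3}[/tex] - ....

Similarly for f(x) = cos x about x = 0: f(0) = 1, f '(0) = 0, f ''(0) = -1, f '''(0) = 0, f ''''(0) = 1, so

cos [tex]\delta[/tex] = 1 - [tex]\frac{1}{2!}[/tex][tex]\delta[/tex][tex]^{2}[/tex] + [tex]\frac{1}{4!}[/tex][tex]\delta[/tex][tex]^{4}[/tex] - ....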

Thanks again!
