Complex Analysis: #23 Infinite Products
Before we get involved with infinite products of functions, we should first think about something easier, namely infinite products of numbers alone. So let z_1, z_2, … be a sequence of numbers. These give rise to a sequence of “partial products”

P_n = z_1 · z_2 ⋯ z_n = ∏_{k=1}^n z_k.
But we should realize that there are some special issues to think about here which make the situation different from the simpler one with partial sums.
- For example, with sums the convergence of the series is not affected if we change a single term. But with products, if one of the terms z_k is changed to 0, then obviously all of the subsequent P_n are zero, regardless of what the further terms look like. Therefore we see that it only makes sense to consider products where all terms are non-zero.
- Another thing is that we could have lim_{n→∞} P_n = 0. While this may not seem particularly objectionable at first, it becomes so when one realizes that in this case the limit again remains unchanged if various terms of the product are changed (see the example below).
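To illustrate the second point, take z_k = 1/2 for every k. Then

P_n = ∏_{k=1}^n (1/2) = 2^{-n} → 0,

and the limit remains 0 no matter how we change finitely many of the factors (as long as they stay non-zero), so such a limit tells us nothing about the individual terms.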
Both of these considerations show that, in a way, the number 0 in a product creates the same problems as does the number ∞ in a sum. So we will just agree to do away with the number zero when thinking about infinite products. However, because some people still find it nice to think about the number zero, the following definition will be used.
Definition 15
Let (z_n)_{n∈ℕ} be a sequence of complex numbers which contains at most finitely many zeros. If the sequence of partial products of the non-zero terms converges to a number which is not zero, then we will say that the infinite product ∏_{n∈ℕ} z_n is convergent.
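As a concrete example, take z_n = 1 − 1/n² for n ≥ 2. The partial products telescope:

∏_{k=2}^n (1 − 1/k²) = ∏_{k=2}^n (k−1)(k+1)/k² = (1/n) · ((n+1)/2) = (n+1)/(2n) → 1/2,

so this infinite product is convergent, with value 1/2.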
It is a rather trivial observation that, for a convergent product of the form ∏_{n∈ℕ} z_n, we must have lim_{n→∞} z_n = 1 (since z_n = P_n / P_{n−1} → 1). Furthermore, we can assume that at most one of the z_n is a negative real number. For if two of the factors are negative real numbers, their product is positive, and we may simply replace them by the corresponding positive numbers. In fact, for this reason it is best to exclude negative real numbers from our considerations here completely, and if, as a very special case, we find it convenient to multiply things by the number −1, then that can be done at the end of our calculations.
This means that if we multiply numbers of the form z_n, then we will assume that we can write z_n = r_n e^{iθ_n}, with −π < θ_n < π. Or put another way, we can write log z_n = log r_n + iθ_n. This is the principal branch of the logarithm.
Theorem 46
Let z_k = x_k + iy_k for all k ∈ ℕ, such that if y_k = 0 then x_k > 0. (That is, all complex numbers are allowed except for real numbers which are not positive.) Then ∏_{k=1}^∞ z_k is convergent if and only if ∑_{k=1}^∞ log z_k is convergent (where, of course, we take the principal branch of the logarithm).
Proof
First assume that the sum of the logarithms converges. That is, let S_n = log z_1 + ⋯ + log z_n denote the partial sums, and suppose S_n → S for some S ∈ ℂ. Since e^{a+b} = e^a e^b, we have P_n = e^{S_n} for every n, and so, by the continuity of the exponential function, P_n → e^S, which is not zero. Hence the product is convergent.

Conversely, suppose P_n → P ≠ 0. Then z_{n+1} = P_{n+1}/P_n → 1, so log z_{n+1} → 0. Choose a branch of the logarithm which is continuous in a neighbourhood of P; since e^{S_n} = P_n, for all sufficiently large n we may write S_n = log P_n + 2πi h_n with integers h_n. Now 2πi (h_{n+1} − h_n) = log z_{n+1} − (log P_{n+1} − log P_n) → 0, so the integers h_n are eventually constant, say equal to h, and therefore S_n → log P + 2πi h. Thus the sum of the logarithms converges.
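The identity P_n = e^{S_n} used in the first half of the proof is easy to check numerically. The following is a minimal Python sketch of that check; the particular sequence z_k = 1 + 1/k² is only an illustrative choice and is not taken from the notes.

```python
import cmath

# Illustrative sequence (an assumption for this sketch): z_k = 1 + 1/k^2.
# All factors are non-zero and z_k -> 1.
N = 50
P = 1.0 + 0j   # partial product P_n
S = 0.0 + 0j   # partial sum S_n of principal logarithms

for k in range(1, N + 1):
    z = 1 + 1 / k**2
    P *= z
    S += cmath.log(z)                    # principal branch of log
    assert abs(P - cmath.exp(S)) < 1e-9  # P_n = exp(S_n) at every step

print(P)             # the partial product P_N
print(cmath.exp(S))  # exp of the partial log-sum; agrees with P_N
```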
Of course, the logarithm is always somewhat troublesome to work with directly. Therefore the following theorem reduces things to a criterion which can be more easily checked.
Theorem 47
Writing z_k = 1 + a_k, we have that ∏_{k=1}^∞ (1 + a_k) is absolutely convergent (that is, the sum of the absolute values of the logarithms, ∑_{k=1}^∞ |log(1 + a_k)|, is convergent) if and only if ∑_{k=1}^∞ |a_k| converges.
Proof
Begin by observing that, since log′(z) = 1/z, the derivative of log(1 + z) at z = 0 is 1, and therefore

lim_{z→0} log(1 + z)/z = 1.

In particular, there is a δ > 0 such that

(1/2)|z| ≤ |log(1 + z)| ≤ (3/2)|z| whenever |z| < δ.

Now if either ∑ |a_k| or ∑ |log(1 + a_k)| converges, then its terms tend to zero, and in both cases it follows that a_k → 0; hence |a_k| < δ for all sufficiently large k. For those k the inequalities above show that |a_k| and |log(1 + a_k)| differ by at most a constant factor, and so, by the comparison test, the two series converge or diverge together.
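As a quick illustration of Theorem 47: with a_k = 1/k² the series ∑ 1/k² converges, so ∏_{k=1}^∞ (1 + 1/k²) is absolutely convergent. On the other hand, with a_k = 1/k the series ∑ 1/k diverges, and indeed the partial products satisfy ∏_{k=1}^n (1 + 1/k) = n + 1 → ∞.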