
Friday, June 10, 2022

A tricky convergence problem

This is a slight simplification of Math.SE user @Mike's wonderful solution [1], rephrased in my own words.

Theorem. Suppose

  1. $(a_n)_{n \ge 1}$ is a non-negative, non-increasing sequence,
  2. $(f(n))_{n \ge 1}$ is a sequence of positive integers such that $f(n) \to \infty$, and
  3. $\sum_{n \ge 1} a_{f(n)}$ converges.
Then $\sum_{n \ge 1} \frac{a_n}{f(n)}$ also converges.
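To see the statement in action, here is a quick numerical sanity check on a toy instance of my own choosing (not from the post or [1]): with $a_n = 1/n$ and $f(n) = n^2$, the hypothesis sum is $\sum 1/n^2$ and the conclusion sum is $\sum 1/n^3$, both convergent.

```python
# Toy instance (illustrative choice, not from the post):
# a_n = 1/n is non-negative and non-increasing, f(n) = n^2 -> infinity,
# and the hypothesis sum  a_{f(n)} = 1/n^2  converges.
N = 100_000

hyp = sum(1.0 / (n * n) for n in range(1, N + 1))        # partial sum of a_{f(n)} = 1/n^2
con = sum((1.0 / n) / (n * n) for n in range(1, N + 1))  # partial sum of a_n / f(n) = 1/n^3

# The series converge to pi^2/6 and zeta(3) respectively, so both
# partial sums stay bounded, as the theorem predicts.
print(f"hypothesis partial sum: {hyp:.4f}")
print(f"conclusion partial sum: {con:.4f}")
```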

Proof. We partition $\mathbb{N}_1 = \{1, 2, \dots\}$ into two subsets $A$ and $B$, where
$$ A = \{ n \in \mathbb{N}_1 : f(n) > n \}, \qquad B = \{ n \in \mathbb{N}_1 : f(n) \le n \}. $$

Then it suffices to show that both $\sum_{n \in A} \frac{a_n}{f(n)}$ and $\sum_{n \in B} \frac{a_n}{f(n)}$ are finite.

First sum. Since $(a_n)_{n \in \mathbb{N}_1}$ is non-increasing and $1 \le f(n) \le n$ for each $n \in B$, we have $\frac{a_n}{f(n)} \le a_n \le a_{f(n)}$ termwise, and so
$$ \sum_{n \in B} \frac{a_n}{f(n)} \le \sum_{n \in B} a_{f(n)} < \infty. $$
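The termwise bound on $B$ is easy to check numerically. The sketch below (the random instance is my own illustration, not from the post) verifies $a_n / f(n) \le a_{f(n)}$ for a random non-increasing sequence and a random $f$ with $1 \le f(n) \le n$:

```python
import random

random.seed(0)
N = 1000
# Random non-increasing, non-negative sequence a_1 >= a_2 >= ... >= 0
# (0-based list: a[0] plays the role of a_1).
a = sorted((random.random() for _ in range(N)), reverse=True)
# Random f with 1 <= f(n) <= n, i.e. every index here belongs to B.
f = [random.randint(1, n) for n in range(1, N + 1)]

# Termwise: a_n / f(n) <= a_n <= a_{f(n)}, since f(n) <= n and f(n) >= 1.
assert all(a[n] / f[n] <= a[f[n] - 1] for n in range(N))
print("termwise bound holds on all", N, "indices")
```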

Second sum. Now enumerate $f(\mathbb{N}_1)$ as $\{k_1 < k_2 < \dots\}$ (an infinite set, since $f(n) \to \infty$), and define
$$ A_i = A \cap [k_i, k_{i+1}) $$
for each $i$. Then the following implication holds:
$$ n \in A_i \implies f(n) > n \ge k_i \implies f(n) \ge k_{i+1}, $$
where the last step uses the fact that $f(n)$ is itself one of the $k_j$'s. Since $A_i$ contains at most $k_{i+1} - k_i$ elements and $(a_n)$ is non-increasing, it follows that
$$ \sum_{n \in A_i} \frac{a_n}{f(n)} \le \sum_{n \in A_i} \frac{a_{k_i}}{k_{i+1}} \le \frac{k_{i+1} - k_i}{k_{i+1}} \, a_{k_i} \le a_{k_i}. $$
Summing both sides over $i$, and noting that each $k_i$ is attained by at least one $n$ (so $a_{k_i} \le \sum_{n : f(n) = k_i} a_{f(n)}$),
$$ \sum_{\substack{n \in A \\ n \ge k_1}} \frac{a_n}{f(n)} \le \sum_{i=1}^{\infty} a_{k_i} \le \sum_{i=1}^{\infty} \sum_{n : f(n) = k_i} a_{f(n)} = \sum_{n=1}^{\infty} a_{f(n)} < \infty. $$
The finitely many $n \in A$ with $n < k_1$ contribute only a finite amount, so $\sum_{n \in A} \frac{a_n}{f(n)}$ is finite as well.
Therefore the desired conclusion follows.
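The chain of bounds for the $A$-part can also be spot-checked numerically on a finite truncation. The sketch below (a toy random instance of my own, not from the post) verifies $\sum_{n \in A,\, n \ge k_1} a_n / f(n) \le \sum_n a_{f(n)}$, excluding the finitely many indices below $k_1$ exactly as the partition into the $A_i$ does:

```python
import random

random.seed(42)
N = 500
# Random f with 1 <= f(n) <= 2n, so both parts of the partition occur.
f = [random.randint(1, 2 * n) for n in range(1, N + 1)]
M = max(f)
# Non-increasing, non-negative a_1 >= ... >= a_M (0-based: a[0] is a_1).
a = sorted((random.random() for _ in range(M)), reverse=True)

k1 = min(f)  # smallest attained value of f
# Sum over n in A (i.e. f(n) > n) with n >= k_1, matching the proof's A_i cover.
lhs = sum(a[n] / f[n] for n in range(N) if f[n] > n + 1 and n + 1 >= k1)
rhs = sum(a[f[n] - 1] for n in range(N))  # sum of a_{f(n)}
assert lhs <= rhs
print(f"A-part sum {lhs:.4f} <= hypothesis sum {rhs:.4f}")
```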


References.
  1. Mike, If $\sum_{n=1}^{\infty} a_{f(n)}$ converges, then $\sum_{n=1}^{\infty} \frac{a_n}{f(n)}$ converges., URL (version: 2022-06-05): https://math.stackexchange.com/q/4464970
