Theorem. Suppose
- $(a_n)_{n\ge 1}$ is a non-negative, non-increasing sequence,
- $(f(n))_{n\ge 1}$ is a sequence of positive integers such that $f(n)\to\infty$, and
- $\sum_{n\ge 1} a_{f(n)}$ converges.
Then $\sum_{n\ge 1} \frac{a_n}{f(n)}$ converges as well.
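Before the proof, a quick numerical sanity check in Python. The choices below are illustrative, not from the theorem: $a_n = 1/n$ and $f(n) = 2^n$, so that $\sum a_{f(n)} = \sum 2^{-n} = 1$ converges and the conclusion $\sum a_n/f(n) = \sum \frac{1}{n\,2^n} = \ln 2$ indeed converges.

```python
import math

# Illustrative choices (not from the theorem): a_n = 1/n, f(n) = 2^n.
# (a_n) is non-negative and non-increasing; f(n) -> infinity.
def a(n): return 1.0 / n
def f(n): return 2 ** n

# Hypothesis: sum of a_{f(n)} = sum of 2^{-n} converges (to 1).
hyp = sum(a(f(n)) for n in range(1, 60))
# Conclusion: sum of a_n / f(n) = sum of 1/(n 2^n) converges (to ln 2).
con = sum(a(n) / f(n) for n in range(1, 60))

assert abs(hyp - 1.0) < 1e-12
assert abs(con - math.log(2)) < 1e-12
```

Note that $\sum a_n = \sum 1/n$ itself diverges here, so the hypothesis genuinely concerns the composed sequence $a_{f(n)}$ rather than $(a_n)$ alone.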
Proof. We partition $\mathbb{N}_1=\{1,2,\dots\}$ into the two subsets $$A=\{n\in\mathbb{N}_1 : f(n)>n\}, \qquad B=\{n\in\mathbb{N}_1 : f(n)\le n\}.$$
Then it suffices to show that both $\sum_{n\in A}\frac{a_n}{f(n)}$ and $\sum_{n\in B}\frac{a_n}{f(n)}$ are finite.
First sum. Since $(a_n)_{n\in\mathbb{N}_1}$ is non-increasing and $1\le f(n)\le n$ for each $n\in B$, we have $a_n\le a_{f(n)}$ and $\frac{1}{f(n)}\le 1$, hence $$\sum_{n\in B}\frac{a_n}{f(n)}\le\sum_{n\in B}a_{f(n)}<\infty.$$
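The termwise bound behind this estimate, $\frac{a_n}{f(n)}\le a_{f(n)}$ for $n\in B$, can be checked numerically. The choices of $a$ and $f$ below are hypothetical, picked only so that $B$ is nonempty:

```python
# Hypothetical example: a_n = 1/n^2, and f(n) = 2n for odd n, n//2 for
# even n, so that B = {n : f(n) <= n} is nonempty (it contains the evens).
def a(n): return 1.0 / n ** 2
def f(n): return 2 * n if n % 2 else n // 2

N = 1000
B = [n for n in range(1, N + 1) if f(n) <= n]
assert B, "B should be nonempty for this f"

# For n in B: a_n <= a_{f(n)} since a is non-increasing and f(n) <= n,
# and dividing by f(n) >= 1 only shrinks the left-hand side.
assert all(a(n) / f(n) <= a(f(n)) for n in B)
```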
Second sum. Now enumerate $f(\mathbb{N}_1)$ as $\{k_1<k_2<\dots\}$ (an infinite set, since $f(n)\to\infty$), and define $A_i=A\cap[k_i,k_{i+1})$ for each $i$. Then the following implication holds: $$n\in A_i \implies f(n)>n\ge k_i \implies f(n)\ge k_{i+1},$$ where the last step uses that $f(n)$ lies in $f(\mathbb{N}_1)$ and $k_{i+1}$ is the smallest element of $f(\mathbb{N}_1)$ exceeding $k_i$.
So, using $n\ge k_i$ together with $|A_i|\le k_{i+1}-k_i$, it follows that $$\sum_{n\in A_i}\frac{a_n}{f(n)}\le\sum_{n\in A_i}\frac{a_{k_i}}{k_{i+1}}\le\frac{k_{i+1}-k_i}{k_{i+1}}\,a_{k_i}\le a_{k_i}.$$
Summing both sides over $i$, and noting that each $k_i$ is attained by at least one $n$ (so $a_{k_i}\le\sum_{n:\,f(n)=k_i}a_{f(n)}$),
$$\sum_{n\in A}\frac{a_n}{f(n)}\le\sum_{i=1}^{\infty}a_{k_i}\le\sum_{i=1}^{\infty}\,\sum_{n:\,f(n)=k_i}a_{f(n)}=\sum_{n=1}^{\infty}a_{f(n)}<\infty.$$
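Both displayed estimates (the implication $f(n)\ge k_{i+1}$ on $A_i$, and the block bound $\sum_{n\in A_i}\frac{a_n}{f(n)}\le a_{k_i}$) can be sanity-checked numerically. The choices of $a$ and $f$ below are hypothetical, and the enumeration $k_1<k_2<\dots$ is truncated to the values $f$ attains on $\{1,\dots,N\}$:

```python
# Hypothetical example: a_n = 1/n^2, f(n) = 2n for odd n, n//2 for even n.
def a(n): return 1.0 / n ** 2
def f(n): return 2 * n if n % 2 else n // 2

N = 2000
ks = sorted({f(n) for n in range(1, N + 1)})  # k_1 < k_2 < ... (truncated)
A = [n for n in range(1, N + 1) if f(n) > n]  # here A = the odd n

for i in range(len(ks) - 1):
    Ai = [n for n in A if ks[i] <= n < ks[i + 1]]
    # Implication: n in A_i forces f(n) >= k_{i+1}.
    assert all(f(n) >= ks[i + 1] for n in Ai)
    # Block estimate: the sum over A_i of a_n/f(n) is at most a_{k_i}.
    assert sum(a(n) / f(n) for n in Ai) <= a(ks[i])
```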
Therefore the desired conclusion follows. $\blacksquare$
References.
- [1] Mike, "If $\sum_{n=1}^{\infty} a_{f(n)}$ converges, then $\sum_{n=1}^{\infty} \frac{a_n}{f(n)}$ converges," Mathematics Stack Exchange, URL (version: 2022-06-05): https://math.stackexchange.com/q/4464970