Friday, June 10, 2022

A tricky convergence problem

This is a slight simplification of Math.SE user @Mike's wonderful solution [1], rephrased in my own words.

Theorem. Suppose

  1. (a_n)_{n \geq 1} is a non-negative, non-increasing sequence,
  2. (f(n))_{n \geq 1} is a sequence of positive integers such that f(n) \to \infty, and
  3. \sum_{n \geq 1} a_{f(n)} converges.
Then \sum_{n \geq 1} \frac{a_n}{f(n)} also converges.

Proof. We partition \mathbb{N}_1 = \{1, 2, \ldots\} into two subsets A and B, where A = \{ n \in \mathbb{N}_1 : f(n) \gt n \}, \qquad B = \{ n \in \mathbb{N}_1 : f(n) \leq n \}. Then it suffices to show that both \sum_{n \in A} \frac{a_n}{f(n)} and \sum_{n \in B} \frac{a_n}{f(n)} are finite.

First sum. Since (a_n)_{n \in \mathbb{N}_1} is non-increasing and 1 \leq f(n) \leq n for each n \in B, we get \sum_{n \in B} \frac{a_n}{f(n)} \leq \sum_{n \in B} a_{f(n)} \lt \infty.

Second sum. Now enumerate f(\mathbb{N}_1) as \{k_1 \lt k_2 \lt \cdots\}, and define A_i = A \cap [k_i, k_{i+1}) for each i. (Only the finitely many n \lt k_1 lie in A but in no A_i, and they do not affect convergence.) Since f takes values in \{k_1, k_2, \ldots\}, the following implication holds true: n \in A_i \implies f(n) \gt n \geq k_i \implies f(n) \geq k_{i+1}. So it follows that \sum_{n \in A_i} \frac{a_n}{f(n)} \leq \sum_{n \in A_i} \frac{a_{k_i}}{k_{i+1}} \leq \frac{k_{i+1} - k_i}{k_{i+1}} \, a_{k_i} \leq a_{k_i}, where the second inequality uses |A_i| \leq k_{i+1} - k_i. Summing both sides over i, \sum_{n \in A} \frac{a_n}{f(n)} \leq \sum_{i=1}^{\infty} a_{k_i} \leq \sum_{i=1}^{\infty} \sum_{n \mathbin{:} f(n) = k_i} a_{f(n)} = \sum_{n=1}^{\infty} a_{f(n)} \lt \infty, where the middle inequality holds because each k_i is attained by f at least once. Therefore the desired conclusion follows. \square
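As a quick numerical illustration (my own toy example, not part of [1]): take a_n = 1/n and f(n) = n^2, so that \sum_{n \geq 1} a_{f(n)} = \sum_{n \geq 1} 1/n^2 converges; the theorem then predicts that \sum_{n \geq 1} a_n/f(n) = \sum_{n \geq 1} 1/n^3 converges as well. A short Python sketch:

    # Toy example: a_n = 1/n, f(n) = n^2, so a_{f(n)} = 1/n^2 and a_n/f(n) = 1/n^3.
    # Both partial sums stabilize, illustrating the theorem.
    N = 10**6
    s_hypothesis = sum(1.0 / n**2 for n in range(1, N + 1))  # sum of a_{f(n)}
    s_conclusion = sum(1.0 / n**3 for n in range(1, N + 1))  # sum of a_n / f(n)
    print(s_hypothesis)  # ~ 1.644933 (pi^2 / 6)
    print(s_conclusion)  # ~ 1.202057 (zeta(3))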


References.
  1. [1] Mike, If \sum_{n=1}^{\infty} a_{f(n)} converges, then \sum_{n=1}^{\infty} \frac{a_n}{f(n)} converges., URL (version: 2022-06-05): https://math.stackexchange.com/q/4464970

A property of an integrable function on R

Theorem. Let f \in L^1(\mathbb{R}, \mathrm{d}x). Then \lim_{n\to\infty} f(nx) = 0 for a.e. x \in \mathbb{R}.

The proof below is essentially what is shown in [1], written in my own words.

Proof. Fix \varepsilon \gt 0 and define E = \{x \in \mathbb{R} : |f(x)| \geq \varepsilon\}. Then for any [a, b] \subseteq (0, \infty), \begin{align*} \int_{a}^{b} \sum_{n=1}^{\infty} \mathbf{1}_{E}(nx) \, \mathrm{d}x &= \sum_{n=1}^{\infty} \frac{1}{n} \int_{na}^{nb} \mathbf{1}_{E}(t) \, \mathrm{d}t \\ &= \int_{E} \sum_{n=1}^{\infty} \frac{1}{n} \mathbf{1}_{\{na \leq t \leq nb\}} \, \mathrm{d}t \\ &= \int_{E} \sum_{n=1}^{\infty} \frac{1}{n} \mathbf{1}_{\{ t/b \leq n \leq t/a \}} \, \mathrm{d}t \\ &\leq \int_{E} (2 + \log(b/a)) \, \mathrm{d}t \\ &= (2 + \log(b/a)) |E|, \end{align*} where |E| denotes the Lebesgue measure of E. Since f is integrable, |E| \lt \infty. So the sum \sum_{n=1}^{\infty} \mathbf{1}_{E}(nx) is finite for a.e. x in [a, b], and hence nx \in E for only finitely many n for a.e. x in [a, b]. Since [a, b] \subseteq (0, \infty) is arbitrary, the same is true for a.e. x \gt 0, and applying this to x \mapsto f(-x) covers a.e. x \lt 0 as well. From this, it is now routine to conclude the desired claim. \square
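For completeness, the routine step can be spelled out as follows. For each k \in \mathbb{N}, apply the above with \varepsilon = 1/k to obtain a null set N_k outside of which nx \in \{|f| \geq 1/k\} holds for only finitely many n. Then for every x \notin \bigcup_{k=1}^{\infty} N_k, we have \limsup_{n\to\infty} |f(nx)| \leq 1/k for all k, and hence f(nx) \to 0 as n \to \infty.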


References
  • [1] Emmanuel Lesigne, On the behavior at infinity of an integrable function, American Mathematical Monthly 117 (2010), no. 2, pp. 175–181. ⟨hal-00276738v3⟩

Tuesday, June 7, 2022

A simple but useful estimate

Theorem. Let f : \mathbb{R} \to \mathbb{R} be a continuous, integrable function of bounded variation. Then \sum_{n\in\mathbb{Z}} f(n) converges absolutely, and we have \left| \sum_{n\in \mathbb{Z}} f(n) - \int_{\mathbb{R}} f(x) \, \mathrm{d}x \right| \leq \frac{1}{2} V(f), where V(f) = \int_{\mathbb{R}} |\mathrm{d}f(x)| denotes the total variation of f over \mathbb{R}.

Proof. Let \tilde{B}(x) = x - \lfloor x \rfloor - \frac{1}{2}. Then performing integration by parts, for a \le b, \begin{align*} \int_{a}^{b} f(x) \, \mathrm{d}x - \sum_{n \in [a, b]\cap\mathbb{Z}} f(n) &= \int_{[a, b]} f(x) \, \mathrm{d}\tilde{B}(x) \\ &= [f(x)\tilde{B}(x)]_{a^-}^{b} - \int_{[a,b]} \tilde{B}(x) \, \mathrm{d}f(x). \end{align*} Using the fact that |\tilde{B}(x)| \leq \frac{1}{2}, it therefore follows that \begin{align*} \left| \int_{a}^{b} f(x) \, \mathrm{d}x - \sum_{n \in [a, b]\cap\mathbb{Z}} f(n) \right| \leq \frac{|f(a)| + |f(b)|}{2} + \frac{1}{2} \int_{[a,b]} |\mathrm{d}f(x)|. \end{align*} From the assumption, it is easy to check that f(x) \to 0 as |x| \to \infty. So the above estimate shows that the partial sums of \sum_{n\in\mathbb{Z}} f(n) satisfy the Cauchy criterion as a \to -\infty and b \to \infty, hence the sum converges. Moreover, noting that |f| also satisfies the assumption in place of f, we find that this sum converges absolutely. Finally, passing to the limit as [a,b] \uparrow \mathbb{R} proves the desired bound. \square
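As a numerical illustration (my own example): f(x) = e^{-x^2} is continuous, integrable, and of bounded variation with V(f) = 2f(0) = 2, since f increases to its maximum at 0 and then decreases. The theorem thus guarantees \left| \sum_{n\in\mathbb{Z}} f(n) - \sqrt{\pi} \right| \leq 1, and the actual difference is far smaller:

    import math

    # f(x) = exp(-x^2): continuous, integrable, V(f) = 2*f(0) = 2.
    f = lambda x: math.exp(-x * x)

    total = sum(f(n) for n in range(-50, 51))  # terms beyond |n| = 50 are negligible
    integral = math.sqrt(math.pi)              # exact integral of f over R
    print(abs(total - integral))               # ~ 1.8e-4, well below V(f)/2 = 1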

Cauchy–Binet formula

Write [n] for the set \{1,2,\ldots,n\} and \binom{[n]}{k} for the family of all subsets of [n] of size k. Also, for an m \times n matrix X, I \subseteq [m], and J \subseteq [n], let X_{I,J} denote the submatrix obtained from X by deleting row i for each i \notin I and column j for each j \notin J.

Theorem. (Cauchy–Binet formula) Let A be a k\times n matrix and B be an n\times k matrix over a field F. Then \det(AB) = \sum_{S\in\binom{[n]}{k}} \det(A_{[k],S}) \det(B_{S,[k]}).

1st Proof. Let (e_1,\ldots, e_m) denote the standard basis of F^m. (For convenience, we abuse notation so that e_i stands for the ith standard basis vector of F^m for every m \geq i.) Then \begin{align*} \det(AB)(e_1 \wedge \cdots \wedge e_k) &= \bigl( {\textstyle\bigwedge}^k AB \bigr)(e_1 \wedge \cdots \wedge e_k) \\ &= \bigl( {\textstyle\bigwedge}^k A \bigr)[(Be_1) \wedge \cdots \wedge (Be_k)]. \end{align*} Now by noting that Be_j = \sum_{i=1}^{n} B_{ij} e_i is the jth column of B, we may expand the exterior product in the last line (terms with a repeated basis vector vanish, so only injective \rho survive) as \begin{align*} &(Be_1) \wedge \cdots \wedge (Be_k) \\ &= \sum_{\substack{\rho : [k] \to [n] \\ \rho \text{ injective} }} (B_{\rho(1),1} e_{\rho(1)}) \wedge \cdots \wedge (B_{\rho(k),k} e_{\rho(k)} ) \\ &= \sum_{\substack{\sigma : [k] \to [n] \\ \sigma \text{ increasing} }} \sum_{\substack{\tau : [k] \to [k] \\ \tau \text{ bijective}}} \left( \prod_{i=1}^{k} B_{\sigma(\tau(i)),i} \right) (e_{\sigma(\tau(1))} \wedge \cdots \wedge e_{\sigma(\tau(k))} ) \\ &= \sum_{\substack{\sigma : [k] \to [n] \\ \sigma \text{ increasing} }} \sum_{\substack{\tau : [k] \to [k] \\ \tau \text{ bijective}}} \operatorname{sgn}(\tau) \left( \prod_{i=1}^{k} B_{\sigma(\tau(i)),i} \right) (e_{\sigma(1)} \wedge \cdots \wedge e_{\sigma(k)} ) \\ &= \sum_{\substack{\sigma : [k] \to [n] \\ \sigma \text{ increasing} }} \det(B_{\sigma([k]),[k]}) (e_{\sigma(1)} \wedge \cdots \wedge e_{\sigma(k)} ). \end{align*} Plugging this back, we therefore get \begin{align*} &\det(AB)(e_1 \wedge \cdots \wedge e_k) \\ &= \sum_{\substack{\sigma : [k] \to [n] \\ \sigma \text{ increasing} }} \det(B_{\sigma([k]),[k]}) [(Ae_{\sigma(1)}) \wedge \cdots \wedge (Ae_{\sigma(k)}) ] \\ &= \sum_{\substack{\sigma : [k] \to [n] \\ \sigma \text{ increasing} }} \det(B_{\sigma([k]),[k]}) \det(A_{[k],\sigma([k])}) (e_1 \wedge \cdots \wedge e_k) \\ &= \sum_{S \in \binom{[n]}{k}} \det(A_{[k],S}) \det(B_{S,[k]}) (e_1 \wedge \cdots \wedge e_k). \end{align*} This completes the proof. \square

2nd Proof. We specialize to the case F = \mathbb{C}. Also, for each index set I, we abbreviate \wedge_{i\in I} e_i = \wedge I. Then, using the inner product between multivectors and the fact that (\wedge S)_{S \in \binom{[n]}{k}} is an orthonormal basis of \wedge^k \mathbb{C}^n, we get \begin{align*} \det(AB) &= \left\langle (\wedge^k A^*)(\wedge [k]), (\wedge^k B)(\wedge [k]) \right\rangle \\ &= \sum_{S \in \binom{[n]}{k}} \left\langle (\wedge^k A^*)(\wedge [k]), \wedge S \right\rangle \left\langle \wedge S, (\wedge^k B)(\wedge [k]) \right\rangle \\ &= \sum_{S \in \binom{[n]}{k}} \det(A_{[k],S})\det(B_{S,[k]}). \end{align*} \square
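Both sides of the formula are also easy to compare numerically. Here is a quick sanity check with random matrices (my own sketch, assuming numpy is available; indices are 0-based):

    import itertools
    import numpy as np

    # Check det(AB) = sum over k-subsets S of det(A_{[k],S}) * det(B_{S,[k]}).
    rng = np.random.default_rng(0)
    k, n = 3, 5
    A = rng.standard_normal((k, n))
    B = rng.standard_normal((n, k))

    lhs = np.linalg.det(A @ B)
    rhs = sum(
        np.linalg.det(A[:, list(S)]) * np.linalg.det(B[list(S), :])
        for S in itertools.combinations(range(n), k)
    )
    print(lhs, rhs)  # the two values agree up to rounding error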

A similar technique shows:

Proposition. (Coefficients of a characteristic polynomial) Let A be an n\times n matrix. Then \det(zI_n + A) = \sum_{k=0}^{n} z^{n-k} \sum_{S\in\binom{[n]}{k}} \det(A_{S,S}).
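This identity can be sanity-checked the same way (again my own sketch; the k = 0 term is z^n, the determinant of the empty principal submatrix being 1 by convention):

    import itertools
    import numpy as np

    # Check det(z*I + A) = sum_k z^(n-k) * (sum over k-subsets S of det(A_{S,S})).
    rng = np.random.default_rng(1)
    n, z = 4, 0.7
    A = rng.standard_normal((n, n))

    lhs = np.linalg.det(z * np.eye(n) + A)
    rhs = z**n  # k = 0 term
    for k in range(1, n + 1):
        rhs += z ** (n - k) * sum(
            np.linalg.det(A[np.ix_(list(S), list(S))])
            for S in itertools.combinations(range(n), k)
        )
    print(lhs, rhs)  # should agree up to rounding error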

Proposition. Let a_1, \ldots, a_p \in \mathbb{C} and v_1, \ldots, v_p \in \mathbb{C}^n. Then \det\left( \sum_{j=1}^{p} a_j v_j v_j^{*} \right) = \sum_{\{j_1 \lt \ldots \lt j_n \} \subseteq [p]} (a_{j_1} \cdots a_{j_n}) \left| \det( v_{j_1}, \ldots, v_{j_n} ) \right|^2.
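And once more numerically (my own sketch; the sum on the right runs over the n-element subsets of [p]):

    import itertools
    import numpy as np

    # Check det(sum_j a_j v_j v_j^*) against the sum over n-subsets of [p].
    rng = np.random.default_rng(2)
    n, p = 2, 4
    a = rng.standard_normal(p)
    V = rng.standard_normal((n, p)) + 1j * rng.standard_normal((n, p))  # columns are v_j

    M = sum(a[j] * np.outer(V[:, j], V[:, j].conj()) for j in range(p))
    lhs = np.linalg.det(M)
    rhs = sum(
        np.prod(a[list(S)]) * abs(np.linalg.det(V[:, list(S)])) ** 2
        for S in itertools.combinations(range(p), n)
    )
    print(lhs.real, rhs)  # lhs is real up to rounding since M is Hermitian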

Determinant of a Cauchy matrix

Throughout this posting, the expressions \sum_{\mathit{vars} \mathop{:} \mathit{conds}} f(\mathit{vars}) \qquad\text{and}\qquad \prod_{\mathit{vars} \mathop{:} \mathit{conds}} f(\mathit{vars}) will refer to the sum and product of f(\mathit{vars}) over all possible values of \mathit{vars} subject to the constraints \mathit{conds}, respectively.

Theorem. (Determinant of a Cauchy matrix) Let A be an n\times n matrix with entries a_{ij} of the form a_{ij} = \frac{1}{x_i + y_j}. (Think of A as a matrix over the field of rational functions in the variables x_i and y_j.) Then \det A = \frac{\prod_{i, j \mathbin{:} i \lt j} (x_i - x_j)(y_i - y_j)}{\prod_{i, j} (x_i + y_j)}.

Proof. We prove the claim by induction on n. The base case n = 1 is clear, since both sides reduce to \frac{1}{x_1 + y_1} (the empty products in the numerator being 1), so it suffices to establish the inductive step.

Suppose that the claim holds for all (n-1)\times(n-1) matrices. Also, recall that if p(z) is a polynomial of degree less than n and if \alpha_1, \ldots, \alpha_n are distinct, then we have the partial fraction decomposition \frac{p(z)}{\prod_{j} (z - \alpha_j)} = \sum_{k} \frac{p(\alpha_k)}{(z - \alpha_k) \prod_{j \mathbin{:} j \neq k} (\alpha_k - \alpha_j)}. So, by regarding x_n as the variable and applying the partial fraction decomposition, \begin{align*} &\frac{\prod_{i \mathbin{:} i \lt n} (x_i - x_n)}{\prod_{j} (x_n + y_j)} \\ &= \sum_{k} \frac{\prod_{i \mathbin{:} i \lt n} (x_i + y_k)}{(x_n + y_k) \prod_{j \mathbin{:} j \neq k} (y_j - y_k)} \\ &= \sum_{k} (-1)^{n-k} \frac{\prod_{i \mathbin{:} i \lt n} (x_i + y_k)}{(x_n + y_k) \bigl[ \prod_{j \mathbin{:} j \lt k} (y_j - y_k) \bigr]\bigl[ \prod_{j \mathbin{:} k \lt j} (y_k - y_j) \bigr]}. \end{align*} From this, it follows that \begin{align*} &\frac{\prod_{i, j \mathbin{:} i \lt j} (x_i - x_j)(y_i - y_j)}{\prod_{i, j} (x_i + y_j)} \\ &= \frac{\bigl[ \prod_{i, j \mathbin{:} i \lt j \lt n} (x_i - x_j) \bigr]\bigl[ \prod_{i, j \mathbin{:} i \lt j} (y_i - y_j) \bigr]}{\prod_{i, j \mathbin{:} i \lt n} (x_i + y_j)} \cdot \frac{\prod_{i \mathbin{:} i \lt n} (x_i - x_n)}{\prod_{j} (x_n + y_j)} \\ &= \sum_{k} \frac{(-1)^{n-k}}{x_n + y_k} \cdot \frac{\bigl[ \prod_{i, j \mathbin{:} i \lt j \lt n} (x_i - x_j) \bigr]\bigl[ \prod_{i, j \mathbin{:} i \lt j; i \neq k; j \neq k} (y_i - y_j) \bigr]}{\prod_{i, j \mathbin{:} i \lt n; j\neq k} (x_i + y_j)}. \end{align*} However, the last line is precisely the cofactor expansion of \det A along the nth row, where the determinants of the submatrices are computed using the inductive hypothesis. Therefore the inductive step is established and the proof is done. \square
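As with the previous identities, a quick numerical sanity check (my own sketch; with random real x_i and y_j the denominators are nonzero with probability one):

    import numpy as np

    # Check the Cauchy determinant formula with random x_i and y_j.
    rng = np.random.default_rng(3)
    n = 4
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)

    A = 1.0 / (x[:, None] + y[None, :])   # a_ij = 1 / (x_i + y_j)
    lhs = np.linalg.det(A)

    num = np.prod([(x[i] - x[j]) * (y[i] - y[j])
                   for i in range(n) for j in range(i + 1, n)])
    den = np.prod([x[i] + y[j] for i in range(n) for j in range(n)])
    print(lhs, num / den)  # should agree up to rounding error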