
Tuesday, September 12, 2017

Euler's transformation - series acceleration

Theorem. Assume that (a_n)_{n\geq 0} satisfies \limsup_{n\to\infty} |a_n|^{1/n} \leq 1. Then for x\in[0,1) we have \sum_{n=0}^{\infty} (-1)^n a_n x^n = \frac{1}{1+x} \sum_{n=0}^{\infty} (-1)^n \Delta^n a_0 \left( \frac{x}{1+x}\right)^n . \tag{*} Here, both sides converge absolutely and \Delta^n is the n-fold forward difference defined by \Delta^n a_j = \sum_{k=0}^{n} \binom{n}{k} (-1)^{n-k} a_{j+k}.

Proof. There is nothing to prove if x = 0, so we assume throughout that 0 < x < 1. First, absolute convergence of the LHS of \text{(*)} is straightforward. To verify absolute convergence of the RHS, we first note that there exist C and \beta such that 1 < \beta < \frac{x+1}{2x} and |a_n| \leq C \beta^n; this is possible since \limsup_{n\to\infty} |a_n|^{1/n} \leq 1 < \frac{x+1}{2x}. Then \frac{1}{2^n} |\Delta^n a_j| \leq \max_{0\leq k \leq n} |a_{j+k}| \leq C\beta^{j+n}. \tag{1} Absolute convergence of the RHS of \text{(*)} then follows from comparison together with the estimate \left| (-1)^n \Delta^n a_0 \left( \frac{x}{1+x}\right)^n \right| \leq C \left( \frac{2\beta x}{1+x}\right)^n. Next we show the equality \text{(*)}. We write \begin{align*} \sum_{n=0}^{\infty} (-1)^n a_n x^n &= \sum_{n=0}^{\infty} (-1)^n a_0 x^n + \sum_{n=0}^{\infty} (-1)^n (a_n - a_0) x^n \\ &= \frac{a_0}{1+x} - x \sum_{n=0}^{\infty} (-1)^n (a_{n+1} - a_0) x^n. \end{align*} The latter sum can be computed as \begin{align*} \sum_{n=0}^{\infty} (-1)^n (a_{n+1} - a_0) x^n &= \sum_{n=0}^{\infty} (-1)^n \left( \sum_{l=0}^{n} \Delta a_l \right) x^n \\ &= \sum_{l=0}^{\infty} \Delta a_l \left( \sum_{n=l}^{\infty} (-1)^n x^n \right) \\ &= \frac{1}{1+x} \sum_{l=0}^{\infty} (-1)^l \Delta a_l x^l. \end{align*} Here, interchanging the order of summation is justified by the absolute convergence. So we obtain \sum_{n=0}^{\infty} (-1)^n a_n x^n = \frac{a_0}{1+x} - \frac{x}{1+x} \sum_{l=0}^{\infty} (-1)^l \Delta a_l x^l. \tag{2} Since \limsup_{n\to\infty} |\Delta a_n|^{1/n} \leq 1 as well, we can apply \text{(2)} recursively to obtain \begin{split} \sum_{n=0}^{\infty} (-1)^n a_n x^n &= \frac{1}{1+x} \sum_{n=0}^{N-1} (-1)^n \Delta^n a_0 \left( \frac{x}{1+x}\right)^n \\ &\qquad + \left( -\frac{x}{1+x} \right)^N \sum_{l=0}^{\infty} (-1)^l \Delta^N a_l x^l. \end{split} \tag{3} But by the estimate \text{(1)}, we have \left| \left( -\frac{x}{1+x} \right)^N \sum_{l=0}^{\infty} (-1)^l \Delta^N a_l x^l \right| \leq \left( \frac{2\beta x}{1+x} \right)^N \sum_{l=0}^{\infty} C(\beta x)^{l}. Since \beta x < \frac{2\beta x}{1+x} < 1, this bound vanishes as N \to \infty. Therefore we obtain the desired identity. ////

Example. Let f : [0,\infty) \to \mathbb{R} be a completely monotone function and set a_n = f(n). We know that f admits the representation f(s) = \int_{[0,\infty)} e^{-st} \, \mu(dt) for some finite Borel measure \mu on [0,\infty). Then a simple computation shows that (-1)^n \Delta^n a_0 = \int_{[0,\infty)} (1 - e^{-t})^n \, \mu(dt). This sequence is non-negative and decreases to 0 as n\to\infty. So the limiting form of \text{(*)} as x \to 1^- is available and we obtain \sum_{n=0}^{\infty} (-1)^n a_n = \sum_{n=0}^{\infty} \frac{(-1)^n \Delta^n a_0}{2^{n+1}}. Notice that the resulting series converges exponentially fast regardless of how slowly the original series converges. As a useful example, consider \mu(dt) = t^{p-1}e^{-t} \, dt/\Gamma(p) for p > 0. Then we have the following series acceleration for the Dirichlet eta function: \sum_{n=0}^{\infty} \frac{(-1)^n}{(n+1)^p} = \sum_{n=0}^{\infty} \frac{1}{2^{n+1}} \sum_{k=0}^{n} \binom{n}{k} \frac{(-1)^{k}}{(k+1)^p}. Of course this can be used to compute the Riemann zeta function.
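To see the acceleration concretely, here is a minimal Python sketch (the helper names are ours, not from any library) comparing partial sums of the defining eta series with the transformed series:

```python
import math

def eta_partial(p, N):
    # Partial sum of the defining series sum_{n<N} (-1)^n / (n+1)^p.
    return sum((-1) ** n / (n + 1) ** p for n in range(N))

def eta_euler(p, N):
    # Transformed series: sum_{n<N} 2^{-(n+1)} (-1)^n Delta^n a_0,
    # where (-1)^n Delta^n a_0 = sum_k binom(n,k) (-1)^k / (k+1)^p.
    total = 0.0
    for n in range(N):
        inner = sum(math.comb(n, k) * (-1) ** k / (k + 1) ** p
                    for k in range(n + 1))
        total += inner / 2 ** (n + 1)
    return total

# For p = 1 the sum is log 2: 50 direct terms are still off by ~10^-2,
# while 50 transformed terms are accurate to roughly machine precision.
print(eta_partial(1.0, 50), eta_euler(1.0, 50), math.log(2))
```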

Monday, August 21, 2017

An easy exercise on continued fractions

1. Basic theory of continued fractions

Let (a_n)_{n\geq0} and (b_n)_{n\geq1} be such that a_n, b_n > 0. (By convention, we always set b_{0} = 1.) If we define 2\times 2 matrices (P_n)_{n\geq-1} by

P_n = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 1 & a_0 \end{pmatrix} \begin{pmatrix} 0 & b_1 \\ 1 & a_1 \end{pmatrix} \cdots \begin{pmatrix} 0 & b_n \\ 1 & a_n \end{pmatrix}

then it can be written in the form

P_n = \begin{pmatrix} p_{n-1} & p_n \\ q_{n-1} & q_n \end{pmatrix}

where (p_n) and (q_n) solve the following recurrence relations

\begin{cases} p_n = a_n p_{n-1} + b_n p_{n-2}, & p_{-2} = 0, \ p_{-1} = 1 \\ q_n = a_n q_{n-1} + b_n q_{n-2}, & q_{-2} = 1, \ q_{-1} = 0 \end{cases}

Using the theory of fractional linear transformations, we find that

\frac{p_n}{q_n} = a_0 + \mathop{\vcenter{\Large\mathrm{K}}}_{i=1}^{n} \frac{b_i}{a_i} = a_0 + \dfrac{b_1}{a_1 + \dfrac{b_2}{a_2 + \dfrac{b_3}{\ddots + \dfrac{b_n}{a_n}}}}

where the middle expression is Gauss's Kettenbruch notation for continued fractions. Taking the determinant of P_n and simplifying a little bit, we also obtain

\frac{p_n}{q_n} = a_0 + \sum_{i=1}^{n} (-1)^{i-1} \frac{b_1 \cdots b_i}{q_iq_{i-1}}

which is often useful for establishing the convergence of the infinite continued fraction.
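These recurrences are also how one evaluates convergents in practice. Here is a minimal Python sketch (convergents is our own helper, not a library function); as a sanity check it reproduces the classical convergents of \sqrt{2}:

```python
def convergents(a, b, N):
    # Yields (p_n, q_n) for n = 0, ..., N-1 given coefficient functions
    # a(n) for n >= 0 and b(n) for n >= 1, with the convention b_0 = 1.
    p2, p1 = 0, 1   # p_{-2}, p_{-1}
    q2, q1 = 1, 0   # q_{-2}, q_{-1}
    for n in range(N):
        bn = 1 if n == 0 else b(n)
        p2, p1 = p1, a(n) * p1 + bn * p2
        q2, q1 = q1, a(n) * q1 + bn * q2
        yield p1, q1

# 1 + K(1/2) = sqrt(2): take a_0 = 1, a_n = 2 and b_n = 1 for n >= 1.
for p, q in convergents(lambda n: 1 if n == 0 else 2, lambda n: 1, 8):
    print(p, q, p / q)   # 1/1, 3/2, 7/5, 17/12, ... -> 1.41421...
```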

2. Computing some continued fractions

Let (p_n) and (q_n) be as before. Assume that p_n/q_n converges and that we can find a sequence (r_n) of positive reals such that \sum_{n=0}^{\infty} \frac{q_n}{r_n} x^n converges for x \in [0, 1) and diverges at x = 1. Then we can compute the limit of p_n/q_n through the following averaging argument, a power-series analogue of the Stolz-Cesàro theorem:

a_0 + \mathop{\vcenter{\Large\mathrm{K}}}_{i=1}^{\infty} \frac{b_i}{a_i} = \lim_{n\to\infty} \frac{p_n}{q_n} = \lim_{n\to\infty} \frac{p_n / r_n}{q_n / r_n} = \lim_{x \to 1^-} \frac{\sum_{n=0}^{\infty} \frac{p_n}{r_n} x^n}{\sum_{n=0}^{\infty} \frac{q_n}{r_n} x^n}.

We give some examples to which this technique applies.

Example 1. As the first example, we consider the following identity

\dfrac{1}{1 + \dfrac{2}{2 + \dfrac{3}{3 + \ddots}}} = \mathop{\vcenter{\Large\mathrm{K}}}_{n=1}^{\infty} \frac{n}{n} = \frac{1}{e-1}.

In this case, it turns out that we can choose r_n = n!. Indeed, assume that (c_n) solves the recurrence relation c_n = n c_{n-1} + n c_{n-2}. Then the exponential generating function

y(x) = \sum_{n=0}^{\infty} \frac{c_n}{n!} x^n

solves the initial value problem

y' + \frac{x^2+1}{x(x-1)} y = \frac{c_0}{x(x-1)}, \qquad y(0) = c_0.

The solution is easily found by the integrating factor method.

y(x) = \frac{c_0 + (c_1 - 2c_0)x e^{-x}}{(x-1)^2}.

Plugging c_n = p_n and c_n = q_n, we obtain

\mathop{\vcenter{\Large\mathrm{K}}}_{n=1}^{\infty} \frac{n}{n} = \lim_{x \to 1^-} \frac{p_0 + (p_1 - 2p_0)x e^{-x}}{q_0 + (q_1 - 2q_0)x e^{-x}} = \frac{e^{-1}}{1 - e^{-1}} = \frac{1}{e-1}

as desired.
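With the recurrence from section 1, this value is easy to confirm numerically (plain Python, exact integer arithmetic):

```python
import math

# Convergents p_n/q_n of K(n/n): a_n = n, b_n = n, with b_0 = 1.
p2, p1, q2, q1 = 0, 1, 1, 0
for n in range(26):
    b = 1 if n == 0 else n
    p2, p1 = p1, n * p1 + b * p2
    q2, q1 = q1, n * q1 + b * q2
print(p1 / q1, 1 / (math.e - 1))   # both ~0.5819767068693265
```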

Example 2. The exponential generating function y(x) of c_n = nc_{n-1} + c_{n-2} solves the equation

(x-1)y'' + 2y' + y = 0.

Any solution of this equation is of the form

y(x) = \frac{\alpha I_1(2\sqrt{1-x}) + \beta K_1(2\sqrt{1-x})}{\sqrt{1-x}},

where I_1 and K_1 are the modified Bessel functions of order 1. From this we deduce that

\dfrac{1}{1 + \dfrac{1}{2 + \dfrac{1}{3 + \ddots}}} = \mathop{\vcenter{\Large\mathrm{K}}}_{n=1}^{\infty} \frac{1}{n} = \lim_{x \to 1^-} \frac{I_1(2) K_1(2\sqrt{1-x}) - K_1(2) I_1(2\sqrt{1-x})}{I_0(2) K_1(2\sqrt{1-x}) + K_0(2) I_1(2\sqrt{1-x})} = \frac{I_1(2)}{I_0(2)}.
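This value is easy to check numerically; the sketch below assumes the mpmath library for the modified Bessel function values:

```python
from mpmath import besseli

# Convergents of K(1/n): a_n = n, b_n = 1, with b_0 = 1.
p2, p1, q2, q1 = 0, 1, 1, 0
for n in range(30):
    p2, p1 = p1, n * p1 + p2
    q2, q1 = q1, n * q1 + q2
print(p1 / q1)                          # ~0.6977746579640...
print(besseli(1, 2) / besseli(0, 2))    # same value
```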

Example 3. Consider the Rogers-Ramanujan continued fraction

\dfrac{1}{1 + \dfrac{q}{1 + \dfrac{q^2}{1 + \ddots}}} = \mathop{\vcenter{\Large\mathrm{K}}}_{n=1}^{\infty} \frac{q^{n-1}}{1}.

Adopting the same strategy, we find that the generating function y(x) = \sum_{n=0}^{\infty} c_n x^n for the recurrence relation c_n = c_{n-1} + q^{n-1} c_{n-2} satisfies

y(x) = c_0 + \frac{c_1 x}{1-x} + \frac{qx^2}{1-x} y(qx).

Let y_0 be the solution for (c_0, c_1) = (0, 1) and y_1 the solution for (c_0, c_1) = (1, 1). Then both y_0 and y_1 have a simple pole at x = 1, and reading off the coefficient of \frac{1}{1-x} from the functional equation gives c_1 + q y(q) in each case. Thus

\mathop{\vcenter{\Large\mathrm{K}}}_{n=1}^{\infty} \frac{q^{n-1}}{1} = \frac{1 + q y_0(q)}{1 + q y_1(q)}.

Iterating the functional equation for y_0 and y_1, we find that

1 + qy_0(q) = \sum_{n=0}^{\infty} \frac{q^{n(n+1)}}{(q;q)_n}, \qquad 1 + qy_1(q) = \sum_{n=0}^{\infty} \frac{q^{n^2}}{(q;q)_n}.

The Rogers-Ramanujan identities tell us that these sums can be represented in terms of infinite q-Pochhammer symbols.
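As a numerical sanity check (plain Python), one can compare a truncated continued fraction with the ratio of the two q-series at, say, q = 1/2:

```python
q = 0.5

def rr_sum(shift, N=60):
    # sum_{n>=0} q^(n^2 + shift*n) / (q;q)_n, truncated after N terms.
    total, poch = 0.0, 1.0
    for n in range(N):
        total += q ** (n * n + shift * n) / poch
        poch *= 1 - q ** (n + 1)   # update (q;q)_n -> (q;q)_{n+1}
    return total

cf = 0.0
for n in range(60, 0, -1):         # evaluate K(q^{n-1}/1) bottom-up
    cf = q ** (n - 1) / (1 + cf)

# The two printed numbers agree to roughly double precision.
print(cf, rr_sum(1) / rr_sum(0))
```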

Sunday, February 12, 2017

A simple regularization

(This is a slight modification of my old post.)

Proposition. Let f : (0,\infty) \to \Bbb{C} be a locally integrable function that satisfies the following conditions:

  1. For each \epsilon > 0 the following limit converges: I(\epsilon) = \lim_{R\to\infty} \int_{\epsilon}^{R} \frac{f(x)}{x} \, \mathrm{d}x
  2. There exist constants m = m(f) and c = c(f) such that I(\epsilon) = -m\log\epsilon + c + o(1) \qquad \text{as }\epsilon \to 0^+.
Then the Laplace transform \mathcal{L}f is a well-defined continuous function on (0,\infty) and we have c = \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \left( \int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s - m \log R \right) - m\gamma, \tag{1} where \gamma is the Euler-Mascheroni constant.

Remark. Before the actual proof, we make some remarks.

  1. If, in addition, \lim_{x\to 0^+} f(x) exists, then this limit is exactly m(f): m(f) = \lim_{x\to 0^+} f(x). This renders m(f) uninteresting in most applications.
  2. c(f) can be represented by the integral \begin{align*} c(f) &= \int_{0}^{\infty} \frac{f(x) - m(f) \mathbf{1}_{(0,1)}(x)}{x} \, \mathrm{d}x \\ &= \int_{0}^{1} \frac{f(x) - m(f)}{x} \, \mathrm{d}x + \int_{1}^{\infty} \frac{f(x)}{x} \, \mathrm{d}x. \end{align*}
  3. If we impose stronger assumptions on f, the proof simplifies greatly. The following lengthy proof is only meaningful when full generality is needed.

Proof. Let g : (0,\infty) \to \Bbb{C} and J : (0,\infty) \to \Bbb{C} be defined by g(x) = \frac{f(x) - m\mathbf{1}_{(0,1)}(x)}{x}, \qquad J(\epsilon) = \lim_{R\to\infty} \int_{\epsilon}^{R} g(x) \, \mathrm{d}x. By the assumption, J is well-defined and satisfies c = \lim_{x \to 0^+} J(x), \qquad 0 = \lim_{x\to\infty} J(x). In particular, J extends to a continuous function on [0,\infty] and is hence bounded. Now using the identity J'(x) = -g(x) on (0,\infty), for 0 < \epsilon < 1 < R we have \begin{align*} &\int_{\epsilon}^{R} f(x) e^{-sx} \, \mathrm{d}x \\ &= m \int_{\epsilon}^{1} e^{-sx} \, \mathrm{d}x - \int_{\epsilon}^{R} x J'(x) e^{-sx} \, \mathrm{d}x \\ &= m \int_{\epsilon}^{1} e^{-sx} \, \mathrm{d}x - \left[ xJ(x)e^{-sx} \right]_{\epsilon}^{R} + \int_{\epsilon}^{R} J(x)(1 - sx)e^{-sx} \, \mathrm{d}x. \end{align*} For s > 0, taking \epsilon \to 0^+ and R \to \infty shows that the expression above converges to \begin{align*} \mathcal{L}f(s) &:= \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \int_{\epsilon}^{R} f(x) e^{-sx} \, \mathrm{d}x \\ &= m\cdot\frac{1 - e^{-s}}{s} + \int_{0}^{\infty} J(x)(1 - sx)e^{-sx} \, \mathrm{d}x. \end{align*} An easy but important remark is that the last integral converges absolutely for s > 0. Now by the simple estimate \int_{\epsilon}^{R} \int_{0}^{\infty} (sx)^\alpha e^{-sx} \, \mathrm{d}x \, \mathrm{d}s = \int_{\epsilon}^{R} \frac{\Gamma(\alpha+1)}{s} \, \mathrm{d}s < \infty, which holds for \alpha > -1, we can apply Fubini's theorem to write \begin{align*} &\int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s \\ &= m \int_{\epsilon}^{R} \frac{1 - e^{-s}}{s} \, \mathrm{d}s + \int_{0}^{\infty} J(x) \left( \int_{\epsilon}^{R} (1 - sx)e^{-sx} \, \mathrm{d}s \right) \, \mathrm{d}x \\ &= m \left[ (1-e^{-s})\log s \right]_{\epsilon}^{R} - m \int_{\epsilon}^{R} e^{-s}\log s \, \mathrm{d}s \\ &\qquad + \int_{0}^{\infty} J(x) \left( Re^{-Rx} - \epsilon e^{-\epsilon x} \right) \, \mathrm{d}x. \end{align*} From this computation, it follows that \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \left( \int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s - m\log R \right) = m \gamma + c. This proves \text{(1)} as expected. ////

In many applications, the logarithmic term cancels out, and thus the primary interest lies in the value of c = c(f). An easy observation is that both m and c are linear. Less obvious properties are summarized in the following table.

Transformation | Relation | Conditions
g(x) = f(x^p) | c(g) = \frac{1}{p}c(f) | p > 0
g(x) = f(px) | c(g) = c(f) - m(f)\log p | p > 0
g(x) = f(x)e^{-\alpha x} | c(g) = c(f) - \int_{0}^{\alpha} \mathcal{L}f(s) \, ds | \Re(\alpha) > 0

It is worth noting that m(g) = m(f) for all of the transformations listed above. Next, the following table summarizes some well-known values of c.

Function f(x) | Value of m(f) | Value of c(f) | Conditions
e^{-x} | f(0) = 1 | -\gamma |
\cos x | f(0) = 1 | -\gamma |
\dfrac{1}{(1+x)^p} | f(0) = 1 | -H_{p-1} | p > 0
\dfrac{x}{e^x - 1} | f(0) = 1 | 0 |
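These entries, and the transformation rules above, are easy to confirm numerically from the integral representation of c(f) in Remark 2. A minimal sketch, assuming scipy is available:

```python
import numpy as np
from scipy.integrate import quad

def c_value(f, m):
    # c(f) = int_0^1 (f(x) - m)/x dx + int_1^inf f(x)/x dx  (Remark 2).
    head, _ = quad(lambda x: (f(x) - m) / x, 0, 1)
    tail, _ = quad(lambda x: f(x) / x, 1, np.inf)
    return head + tail

print(c_value(lambda x: np.exp(-x), 1.0), -np.euler_gamma)
print(c_value(lambda x: x / np.expm1(x), 1.0), 0.0)
# Scaling rule from the first table: c(f(px)) = c(f) - m(f) log p.
print(c_value(lambda x: np.exp(-2 * x), 1.0), -np.euler_gamma - np.log(2))
```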

Here are some easy examples.

Example 1. Let p, q \gt 0. Then \begin{align*} &\int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, \mathrm{d}x \\ &= c\{\cos(x^p) - \exp(-x^q)\} = \frac{1}{p} c\{\cos x\} - \frac{1}{q} c\{e^{-x}\} \\ & = \gamma \left( \frac{1}{q} - \frac{1}{p} \right). \end{align*}

Example 2. Let \alpha, \beta \gt 0. Then \begin{align*} &\int_{0}^{\infty} \frac{1}{x} \left( \frac{1}{1 + \alpha^2 x^2} - \cos (\beta x) \right) \, \mathrm{d}x \\ &= c\left\{ \frac{1}{1 + \alpha^2 x^2} - \cos (\beta x) \right\} \\ &= \frac{1}{2} c\left\{ \frac{1}{1 + x} \right\} - c \{\cos x\} - \log \left( \frac{\alpha}{\beta} \right) \\ &= \gamma - \log \left( \frac{\alpha}{\beta} \right). \end{align*}

Example 3. The Laplace transform of the Bessel function J_0 of the first kind and order 0 is given by \mathcal{L} \{J_0\} (s) = \frac{1}{\sqrt{s^2 + 1}}. Using this and m(J_0) = J_0(0) = 1, we find that \begin{align*} c(J_0) &= \lim_{R \to \infty} \left( \int_{0}^{R} \mathcal{L} \{J_0\}(s) \, \mathrm{d}s - \log R \right) - \gamma \\ &= \lim_{R \to \infty} \left( \operatorname{arsinh}(R) - \log R \right) - \gamma \\ &= \log 2 - \gamma. \end{align*} This also shows \int_{0}^{\infty} \frac{J_0(x) - \mathbf{1}_{(0,1)}(x)}{x} \, \mathrm{d}x = \log 2 - \gamma.
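This limit can be verified in a couple of lines (plain Python; the numerical value of \gamma is hard-coded):

```python
import math

gamma = 0.5772156649015329            # Euler-Mascheroni constant

# int_eps^R ds/sqrt(s^2+1) = arsinh(R) - arsinh(eps), and arsinh(eps) -> 0,
# so c(J_0) = lim_R (arsinh(R) - log R) - gamma.
R = 1e12
print(math.asinh(R) - math.log(R) - gamma)   # ~ log 2 - gamma
print(math.log(2) - gamma)                   # 0.1159315...
```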

Wednesday, February 1, 2017

Fundamental Theorem of Calculus

Theorem. Assume that f : [a, b] \to \Bbb{R} is differentiable on [a, b] and f' is in L^1. Then \int_{a}^{b} f'(x) \, dx = f(b) - f(a).

Remark. This proof is a slightly simplified version of the proof of Theorem 7.21 in Rudin's Real and Complex Analysis, 3rd edition.

Proof. Let l be a lower-semicontinuous function on [a, b] such that l(x) > f'(x) for all x \in [a, b]. Define G : [a, b] \to \Bbb{R} by G(x) = \int_{a}^{x} l(t) \, dt - [f(x) - f(a)]. Then for each x \in [a, b) and small h > 0, we have \frac{G(x+h) - G(x)}{h} \geq \left( \inf_{t \in [x,x+h]}l(t) \right) - \frac{f(x+h) - f(x)}{h} and thus \begin{align*} \liminf_{h\to 0^+} \frac{G(x+h) - G(x)}{h} &\geq \liminf_{h\to 0^+} \left( \inf_{t \in [x,x+h]} l(t) \right) - f'(x) \\ &\geq l(x) - f'(x) \\ &> 0, \end{align*} where the second inequality follows from the lower semicontinuity of l. This shows that G is increasing on [a, b) and, by continuity, G(b) \geq G(a) = 0. From this, we have \int_{a}^{b} l(t) \, dt \geq f(b) - f(a). By the Vitali-Caratheodory theorem, f' can be approximated from above by lower-semicontinuous functions in L^1. Thus it follows that \int_{a}^{b} f'(t) \, dt \geq f(b) - f(a). Replacing f by -f proves the other direction and hence the claim follows. ////

Sunday, January 1, 2017

Glasser's master theorem

In this post we discuss a family of measure-preserving transformations on \mathbb{R}.

1. Glasser's master theorem

There is a complete characterization of rational functions which preserve the Lebesgue measure on \mathbb{R}. This appears at least as early as Pólya and Szegö's book [3] from 1972, but it seems more widely known as Glasser's master theorem after his 1983 paper [1]. Here we give a short proof of a sufficient condition.

Theorem. (Glasser, 1983) Let a_1 < a_2 < \cdots < a_n and \alpha be real numbers and c_1, \cdots, c_n be positive real numbers. Then the function \phi(x) = x - \alpha - \sum_{k=1}^{n} \frac{c_k}{x - a_k} preserves the Lebesgue measure on \mathbb{R}. In particular, for any Lebesgue-integrable function f on \mathbb{R} we have \int_{\mathbb{R}} f(\phi(x)) \, dx = \int_{\mathbb{R}} f(x) \, dx.

Proof. Let I_k = (a_k, a_{k+1}) for k = 0, \cdots, n with the convention a_0 = -\infty and a_{n+1} = \infty. Then by a direct computation, \phi'(x) > 1 on \mathbb{R} \setminus \{a_1, \cdots, a_n\}. Moreover, we have \phi(x) \to +\infty \quad \text{as} \quad x \to a_k^-, \qquad k = 1, \cdots, n+1 and similarly \phi(x) \to -\infty \quad \text{as} \quad x \to a_k^+, \qquad k = 0, \cdots, n. This implies that \phi is a bijection from I_k to \mathbb{R} for each k = 0, \cdots, n. Let \psi_k : \mathbb{R} \to I_k be the inverse of the restriction \phi|_{I_k} for each k = 0, \cdots, n, i.e., \phi \circ \psi_k = \mathrm{id}_{\mathbb{R}}. Then for each y \in \mathbb{R}, the equation \phi(x) = y has exactly n+1 solutions \psi_0(y), \cdots, \psi_n(y). Now multiplying both sides of this equation by (x-a_1)\cdots(x-a_n), we obtain (x-\alpha-y)(x-a_1)\cdots(x-a_n) + \text{[polynomial in $x$ of degree $\leq n-1$]} = 0. Since the left-hand side is a monic polynomial of degree n+1, it must coincide with (x-\psi_0(y))\cdots(x-\psi_n(y)). Then comparing the coefficients of x^n shows that y+\alpha+a_1+\cdots+a_n = \psi_0(y)+\cdots+\psi_n(y), and differentiating in y gives \psi_0'(y)+\cdots+\psi_n'(y) = 1. Therefore for each Lebesgue-integrable function f on \mathbb{R}, we have \int_{\mathbb{R}} f(\phi(x)) \, dx = \sum_{k=0}^{n} \int_{I_k} f(\phi(x)) \, dx = \sum_{k=0}^{n} \int_{\mathbb{R}} f(y)\psi'_k(y) \, dy = \int_{\mathbb{R}} f(y) \, dy and the proof is complete. ////
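Before turning to the examples, here is a quick numerical test of the theorem (assuming scipy is available; the parameters \alpha = 1/4 and the poles and weights are arbitrary choices):

```python
import numpy as np
from scipy.integrate import quad

# A sample phi with alpha = 0.25, poles a = (-1, 2), weights c = (0.5, 3).
def phi(x):
    return x - 0.25 - 0.5 / (x + 1.0) - 3.0 / (x - 2.0)

def f(x):
    return np.exp(-x * x)            # integrates to sqrt(pi) over R

# Integrate f(phi(x)) over the three intervals cut out by the poles.
total = sum(quad(lambda x: f(phi(x)), lo, hi, limit=200)[0]
            for lo, hi in [(-np.inf, -1), (-1, 2), (2, np.inf)])
print(total, np.sqrt(np.pi))         # both ~1.7724538509
```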

2. Examples

Example 1. The theorem above provides an easy way of computing the following integral: for \alpha > 0, \begin{align*} \int_{0}^{\infty} \exp\left\{ -x^2 - \frac{\alpha}{x^2} \right\} \, dx &= \frac{1}{2} \int_{-\infty}^{\infty} \exp\left\{ -x^2 - \frac{\alpha}{x^2} \right\} \, dx \\ &= \frac{1}{2} \int_{-\infty}^{\infty} \exp\left\{ -\left(x - \frac{\sqrt{\alpha}}{x} \right)^2 - 2\sqrt{\alpha} \right\} \, dx \\ &= \frac{1}{2} \int_{-\infty}^{\infty} \exp\{-x^2 - 2\sqrt{\alpha} \} \, dx \\ &= \frac{\sqrt{\pi}}{2} \exp\{-2\sqrt{\alpha} \}. \end{align*}
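This identity is easy to check numerically (again assuming scipy; \alpha = 0.7 is an arbitrary choice):

```python
import math
from scipy.integrate import quad

alpha = 0.7                           # any alpha > 0 works
val, _ = quad(lambda x: math.exp(-x * x - alpha / (x * x)), 0, math.inf)
print(val, math.sqrt(math.pi) / 2 * math.exp(-2 * math.sqrt(alpha)))
```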

Example 2. (Lévy distribution) Let c > 0 and define f : (0, \infty) \to \mathbb{R} by f(x) = \sqrt{\frac{c}{2\pi}} x^{-3/2} \exp\left\{-\frac{c}{2x}\right\}. Its (probabilists') Fourier transform is defined as \phi(t) = \int_{0}^{\infty} e^{itx}f(x) \, dx. Since f is integrable, \phi extends to a complex function which is holomorphic on the upper half-plane \mathbb{H} = \{z \in \mathbb{C} : \operatorname{Im}(z) > 0 \} and continuous on \bar{\mathbb{H}}. Now for s > 0, we have \begin{align*} \phi(is) &= \sqrt{\frac{c}{2\pi}} \int_{0}^{\infty} x^{-3/2} \exp\left\{-\frac{c}{2x}-sx\right\} \, dx \\ &= \frac{2}{\sqrt{\pi}} \int_{0}^{\infty} \exp\left\{-u^2-\frac{cs}{2u^2}\right\} \, du, \qquad (x = (c/2)u^{-2}) \\ &= \exp\{-\sqrt{2cs}\}. \end{align*} From this, we know that \phi(t) = \exp\{-\sqrt{-2ict}\} holds along the positive imaginary axis. Since both sides extend to holomorphic functions on all of \mathbb{H}, this identity remains true on \mathbb{H} by the principle of analytic continuation. Then by continuity, it is also true for t \in \mathbb{R}.

3. Generalization

The following theorem generalizes Glasser's master theorem. It tells us that a certain family of Nevanlinna functions gives rise to measure-preserving transformations on \mathbb{R} = \partial \overline{\mathbb{H}}.

Theorem. (Letac, 1977) Let \alpha be a real number and \mu be a measure on \mathbb{R} which is singular with respect to the Lebesgue measure and satisfies \int_{\mathbb{R}} \frac{\mu(d\lambda)}{1+\lambda^2} < \infty. Then the function \phi(x) = x - \alpha - \lim_{\epsilon \to 0^+} \int_{\mathbb{R}} \left( \frac{1}{x+i\epsilon - \lambda} + \frac{\lambda}{1+\lambda^2} \right) \, \mu(d\lambda) defines a measurable function on \mathbb{R} that preserves the Lebesgue measure on \mathbb{R}.

If \mu is a finite sum of point masses, it reduces to the previous theorem.

References

  • [1] Glasser, M. L. "A Remarkable Property of Definite Integrals." Mathematics of Computation 40, no. 162 (1983): 561-63. doi:10.2307/2007531.
  • [2] Letac, Gérard. "Which Functions Preserve Cauchy Laws?" Proceedings of the American Mathematical Society 67, no. 2 (1977): 277-86. doi:10.2307/2041287.
  • [3] Pólya, G. and Szegö, G. "Problems and Theorems in Analysis" I, II, Problem 118.1. Springer-Verlag, Berlin and New York (1972).