
Sunday, February 12, 2017

A simple regularization

(This is a slight modification of my old post.)

Proposition. Let f : (0,\infty) \to \Bbb{C} be a locally integrable function that satisfies the following conditions:

  1. For each \epsilon > 0 the following limit converges: I(\epsilon) = \lim_{R\to\infty} \int_{\epsilon}^{R} \frac{f(x)}{x} \, \mathrm{d}x.
  2. There exist constants m = m(f) and c = c(f) such that I(\epsilon) = -m\log\epsilon + c + o(1) \qquad \text{as }\epsilon \to 0^+.
Then the Laplace transform \mathcal{L}f(s) is a well-defined continuous function of s on (0,\infty) and we have c = \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \left( \int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s - m \log R \right) - m\gamma, \tag{1} where \gamma is the Euler-Mascheroni constant.
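As a quick plausibility check of \text{(1)}, take f(x) = e^{-x}, for which \mathcal{L}f(s) = \frac{1}{1+s} and m = 1. The snippet below (a minimal sketch in Python, assuming the mpmath library) evaluates the right-hand side of \text{(1)} at large R and small \epsilon and recovers c = -\gamma:

```python
from mpmath import mp, log, euler

mp.dps = 20
R, eps = mp.mpf('1e8'), mp.mpf('1e-8')

# For f(x) = exp(-x) the Laplace transform is Lf(s) = 1/(1+s), so the
# truncated integral in (1) equals log(1+R) - log(1+eps) in closed form.
rhs = (log(1 + R) - log(1 + eps)) - log(R) - euler
print(rhs)     # ≈ -0.5772156649... = -γ
print(-euler)  # c(e^{-x}), cf. the table of values further below
```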

Remark. Before the actual proof, we make some remarks.

  1. If, in addition, \lim_{x\to 0^+} f(x) exists, then this limit is exactly m(f): m(f) = \lim_{x\to 0^+} f(x). This renders m(f) uninteresting in most applications.
  2. c(f) can be represented by the integral \begin{align*} c(f) &= \int_{0}^{\infty} \frac{f(x) - m(f) \mathbf{1}_{(0,1)}(x)}{x} \, \mathrm{d}x \\ &= \int_{0}^{1} \frac{f(x) - m(f)}{x} \, \mathrm{d}x + \int_{1}^{\infty} \frac{f(x)}{x} \, \mathrm{d}x. \end{align*} (A numerical sketch of this representation appears right after these remarks.)
  3. If we impose a stronger assumption on f, the proof simplifies greatly. The following lengthy proof is only meaningful when full generality is needed.
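For instance, the representation in the second remark can be tested numerically. The following sketch (Python with mpmath again; f(x) = e^{-x} is a sample choice) reproduces the same value c = -\gamma found from \text{(1)} above:

```python
from mpmath import mp, quad, exp, euler

mp.dps = 25

# c(f) for f(x) = exp(-x), with m(f) = 1, via the split integral of Remark 2
c = quad(lambda x: (exp(-x) - 1)/x, [0, 1]) + quad(lambda x: exp(-x)/x, [1, mp.inf])
print(c)       # ≈ -0.5772156649...
print(-euler)  # -γ, matching the value from (1)
```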

Proof. Let g : (0,\infty) \to \Bbb{C} and J : (0,\infty) \to \Bbb{C} be defined by g(x) = \frac{f(x) - m\mathbf{1}_{(0,1)}(x)}{x}, \qquad J(\epsilon) = \lim_{R\to\infty} \int_{\epsilon}^{R} g(x) \, \mathrm{d}x. By the assumption, J is well-defined and satisfies c = \lim_{x \to 0^+} J(x), \qquad 0 = \lim_{x\to\infty} J(x). In particular, J extends to a continuous function on [0,\infty] and is hence bounded.

Now using the identity J'(x) = -g(x) on (0,\infty), for 0 \lt \epsilon \lt 1 \lt R we have \begin{align*} &\int_{\epsilon}^{R} f(x) e^{-sx} \, \mathrm{d}x \\ &= m \int_{\epsilon}^{1} e^{-sx} \, \mathrm{d}x - \int_{\epsilon}^{R} x J'(x) e^{-sx} \, \mathrm{d}x \\ &= m \int_{\epsilon}^{1} e^{-sx} \, \mathrm{d}x - \left[ xJ(x)e^{-sx} \right]_{\epsilon}^{R} + \int_{\epsilon}^{R} J(x)(1 - sx)e^{-sx} \, \mathrm{d}x. \end{align*} For s \gt 0, taking \epsilon \to 0^+ and R \to \infty shows that the expression above converges to \begin{align*} \mathcal{L}f(s) &:= \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \int_{\epsilon}^{R} f(x) e^{-sx} \, \mathrm{d}x \\ &= m\cdot\frac{1 - e^{-s}}{s} + \int_{0}^{\infty} J(x)(1 - sx)e^{-sx} \, \mathrm{d}x. \end{align*} An easy but important remark is that the last integral converges absolutely for s \gt 0.

Now by the simple estimate \int_{\epsilon}^{R} \int_{0}^{\infty} (sx)^\alpha e^{-sx} \, \mathrm{d}x \, \mathrm{d}s = \int_{\epsilon}^{R} \frac{\Gamma(\alpha+1)}{s} \, \mathrm{d}s \lt \infty, which holds for \alpha \gt -1, we can apply Fubini's theorem to write \begin{align*} &\int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s \\ &= m \int_{\epsilon}^{R} \frac{1 - e^{-s}}{s} \, \mathrm{d}s + \int_{0}^{\infty} J(x) \left( \int_{\epsilon}^{R} (1 - sx)e^{-sx} \, \mathrm{d}s \right) \, \mathrm{d}x \\ &= m \left[ (1-e^{-s})\log s \right]_{\epsilon}^{R} - m \int_{\epsilon}^{R} e^{-s}\log s \, \mathrm{d}s \\ &\qquad + \int_{0}^{\infty} J(x) \left( Re^{-Rx} - \epsilon e^{-\epsilon x} \right) \, \mathrm{d}x. \end{align*} As \epsilon \to 0^+ and R \to \infty, the bracketed term is m\log R + o(1), the middle integral converges to -m\int_{0}^{\infty} e^{-s}\log s \, \mathrm{d}s = m\gamma, and the last integral tends to J(0^+) - 0 = c, since the probability densities x \mapsto Re^{-Rx} and x \mapsto \epsilon e^{-\epsilon x} concentrate near x = 0 and x = \infty respectively and J is bounded and continuous on [0,\infty]. Therefore \lim_{\substack{\epsilon \to 0^+ \\ R\to\infty}} \left( \int_{\epsilon}^{R} \mathcal{L}f(s) \, \mathrm{d}s - m\log R \right) = m \gamma + c. This proves \text{(1)} as expected. ////

In many applications, the logarithmic term cancels out, and thus the primary interest lies in the value of c = c(f). An easy observation is that both m and c are linear. Less obvious properties are summarized in the following table.

| Transformation | Relation | Conditions |
| --- | --- | --- |
| g(x) = f(x^p) | c(g) = \frac{1}{p}c(f) | p > 0 |
| g(x) = f(px) | c(g) = c(f) - m(f)\log p | p > 0 |
| g(x) = f(x)e^{-\alpha x} | c(g) = c(f) - \int_{0}^{\alpha} \mathcal{L}f(s) \, \mathrm{d}s | \Re(\alpha) > 0 |
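As a sanity check of the scaling rule in the second row, the sketch below (Python with mpmath; the value p = 3 is an arbitrary sample) computes c\{e^{-px}\} from the integral representation in Remark 2 and compares it with c\{e^{-x}\} - \log p = -\gamma - \log p:

```python
from mpmath import mp, quad, exp, log, euler

mp.dps = 20
p = 3  # sample scaling factor

# c(g) for g(x) = f(px) = exp(-p x), with m(g) = 1, via the integral of Remark 2
c_g = quad(lambda x: (exp(-p*x) - 1)/x, [0, 1]) \
    + quad(lambda x: exp(-p*x)/x, [1, mp.inf])
print(c_g)              # ≈ -1.675828...
print(-euler - log(p))  # c(f) - m(f) log p, with c(e^{-x}) = -γ
```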

It is worth noting that m(g) = m(f) in all of the transformations listed above. Next, the following table summarizes some well-known values of c.

| Function f(x) | Value of m(f) | Value of c(f) | Conditions |
| --- | --- | --- | --- |
| e^{-x} | f(0) = 1 | -\gamma | |
| \cos x | f(0) = 1 | -\gamma | |
| \dfrac{1}{(1+x)^p} | f(0) = 1 | -H_{p-1} | p > 0 |
| \dfrac{x}{e^x - 1} | f(0) = 1 | 0 | |

Here H_{p-1} = \psi(p) + \gamma denotes the (generalized) harmonic number.
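The third row can be tested numerically for a non-integer exponent as well. A minimal sketch (Python with mpmath; p = 2.5 is a sample choice, and mpmath's harmonic evaluates the generalized harmonic number):

```python
from mpmath import mp, quad, harmonic

mp.dps = 20
p = mp.mpf('2.5')  # the row is claimed for every p > 0, not only integers

# c(f) for f(x) = (1+x)^(-p) via the integral representation of Remark 2
c = quad(lambda x: ((1 + x)**(-p) - 1)/x, [0, 1]) \
  + quad(lambda x: (1 + x)**(-p)/x, [1, mp.inf])
print(c)                 # ≈ -1.2803720...
print(-harmonic(p - 1))  # -H_{p-1} = -(ψ(p) + γ)
```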

Here are some easy examples.

Example 1. Let p, q \gt 0. Then \begin{align*} &\int_{0}^{\infty} \frac{\cos (x^p) - \exp(-x^q)}{x} \, \mathrm{d}x \\ &= c\{\cos(x^p) - \exp(-x^q)\} = \frac{1}{p} c\{\cos x\} - \frac{1}{q} c\{e^{-x}\} \\ & = \gamma \left( \frac{1}{q} - \frac{1}{p} \right). \end{align*}
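A numerical check of this example (Python with mpmath; the exponents p = 1, q = 2 are sample choices, so the predicted value is -\gamma/2): the head of the integral is computed directly, and the oscillatory tail with quadosc.

```python
from mpmath import mp, quad, quadosc, cos, exp, euler, pi

mp.dps = 20
p, q = 1, 2  # sample exponents; predicted value: γ(1/q - 1/p) = -γ/2

head = quad(lambda x: (cos(x**p) - exp(-x**q))/x, [0, 20])
# beyond x = 20 the exp(-x^q) term is below 1e-170, so only the
# oscillatory cosine tail remains
tail = quadosc(lambda x: cos(x**p)/x, [20, mp.inf], period=2*pi)
print(head + tail)                          # ≈ -0.2886078...
print(euler * (mp.mpf(1)/q - mp.mpf(1)/p))  # γ(1/q - 1/p)
```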

Example 2. Let \alpha, \beta \gt 0. Then \begin{align*} &\int_{0}^{\infty} \frac{1}{x} \left( \frac{1}{1 + \alpha^2 x^2} - \cos (\beta x) \right) \, \mathrm{d}x \\ &= c\left\{ \frac{1}{1 + \alpha^2 x^2} - \cos (\beta x) \right\} \\ &= \frac{1}{2} c\left\{ \frac{1}{1 + x} \right\} - c \{\cos x\} - \log \left( \frac{\alpha}{\beta} \right) \\ &= \gamma - \log \left( \frac{\alpha}{\beta} \right). \end{align*}
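The same kind of numerical check works here (Python with mpmath; \alpha = 2 and \beta = 3 are sample values, so the predicted value is \gamma - \log(2/3)):

```python
from mpmath import mp, quad, quadosc, cos, euler, log, pi

mp.dps = 20
a, b = mp.mpf(2), mp.mpf(3)  # sample values for alpha, beta

head = quad(lambda x: (1/(1 + (a*x)**2) - cos(b*x))/x, [0, 10])
tail = quad(lambda x: 1/(x*(1 + (a*x)**2)), [10, mp.inf]) \
     - quadosc(lambda x: cos(b*x)/x, [10, mp.inf], period=2*pi/b)
print(head + tail)       # ≈ 0.9826808...
print(euler - log(a/b))  # γ - log(α/β)
```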

Example 3. The Laplace transform of the Bessel function J_0 of the first kind of order 0 is given by[1] \mathcal{L} \{J_0\} (s) = \frac{1}{\sqrt{s^2 + 1}}. Using this and m(J_0) = J_0(0) = 1, we find that \begin{align*} c(J_0) &= \lim_{R \to \infty} \left( \int_{0}^{R} \mathcal{L} \{J_0\}(s) \, \mathrm{d}s - \log R \right) - \gamma \\ &= \lim_{R \to \infty} \left( \operatorname{arsinh}(R) - \log R \right) - \gamma \\ &= \log 2 - \gamma. \end{align*} This also shows \int_{0}^{\infty} \frac{J_0(x) - \mathbf{1}_{(0,1)}(x)}{x} \, \mathrm{d}x = \log 2 - \gamma.
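Both the formula-\text{(1)} computation and the final integral can be verified numerically. In the sketch below (Python with mpmath), the oscillatory tail of the integral is handled by quadosc with the zeros of J_0:

```python
from mpmath import mp, quad, quadosc, besselj, besseljzero, asinh, log, euler

mp.dps = 20

# (i) c(J_0) from formula (1): asinh(R) - log R - γ for large R
R = mp.mpf('1e10')
print(asinh(R) - log(R) - euler)  # ≈ 0.1159315...

# (ii) the same value from ∫_0^∞ (J_0(x) - 1_{(0,1)}(x))/x dx
head = quad(lambda x: (besselj(0, x) - 1)/x, [0, 1])
tail = quadosc(lambda x: besselj(0, x)/x, [1, mp.inf],
               zeros=lambda n: besseljzero(0, n))
print(head + tail)     # ≈ 0.1159315...
print(log(2) - euler)  # log 2 - γ
```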

Wednesday, February 1, 2017

Fundamental Theorem of Calculus

Theorem. Assume that f : [a, b] \to \Bbb{R} is differentiable on [a, b] and f' is in L^1. Then \int_{a}^{b} f'(x) \, dx = f(b) - f(a).
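To see the theorem at work beyond continuously differentiable functions, take f(x) = x^2 \sin(1/x) with f(0) = 0 on [0, 1]: f is differentiable everywhere, and f'(x) = 2x\sin(1/x) - \cos(1/x) is bounded (hence in L^1) but discontinuous at 0. The sketch below (Python with mpmath; the substitution x = 1/u, chosen here to tame the oscillation near 0, turns the integral into an absolutely convergent oscillatory one) checks that \int_0^1 f'(x) \, dx = f(1) - f(0) = \sin 1:

```python
from mpmath import mp, quadosc, sin, pi

mp.dps = 20

# With x = 1/u, ∫_0^1 (2x sin(1/x) - cos(1/x)) dx becomes the absolutely
# convergent oscillatory integral ∫_1^∞ (2 sin(u)/u - cos(u)) / u^2 du.
lhs = quadosc(lambda u: (2*sin(u)/u - cos(u))/u**2, [1, mp.inf], period=2*pi)
print(lhs)     # ≈ 0.8414709848...
print(sin(1))  # f(1) - f(0) = sin(1)
```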

Remark. This proof is a slightly simplified version of the proof of Theorem 7.21 in Rudin's Real and Complex Analysis, 3rd edition.

Proof. Let l be a lower-semicontinuous function on [a, b] such that l(x) > f'(x) for all x \in [a, b]. Define G : [a, b] \to \Bbb{R} by G(x) = \int_{a}^{x} l(t) \, dt - [f(x) - f(a)]. Then for each x \in [a, b) and small h > 0, we have \begin{align*} \frac{G(x+h) - G(x)}{h} \geq \left( \inf_{t \in [x,x+h]}l(t) \right) - \frac{f(x+h) - f(x)}{h} \end{align*} and thus \begin{align*} \liminf_{h\to 0^+} \frac{G(x+h) - G(x)}{h} &\geq \liminf_{h\to 0^+} \left( \inf_{t \in [x,x+h]} l(t) \right) - f'(x) \\ &\geq l(x) - f'(x) \\ &> 0, \end{align*} where the second inequality follows from the lower semicontinuity of l at x. This shows that G is increasing on [a, b), and by continuity, G(b) \geq G(a) = 0. From this, we have \int_{a}^{b} l(t) \, dt \geq f(b) - f(a). By the Vitali-Caratheodory theorem, for each \epsilon > 0 there exists such a lower-semicontinuous l \in L^1 with \int_{a}^{b} l(t) \, dt \leq \int_{a}^{b} f'(t) \, dt + \epsilon. Thus it follows that \int_{a}^{b} f'(t) \, dt \geq f(b) - f(a). Replacing f by -f proves the other direction and hence the claim follows. ////