"Find $\lim_{n \to \infty}\frac{x_n}{\sqrt{n}}$ where $x_{n+1}=x_n+\frac{n}{x_1+x_2+\cdots+x_n}$" -where does this problem come from? Recently, I encountered this problem:
"Given a sequence of positive number $(x_n)$ such that for all $n$,
$$x_{n+1}=x_n+\frac{n}{x_1+x_2+\cdots+x_n}$$
Find the limit $\lim_{n \rightarrow \infty} \frac{x_n}{\sqrt{n}}.$"
As pointed out in the comment section of the following topic [1], this problem is indeed open. Out of curiosity, I wonder if this problem has a research-level meaning? For example, does it stem from an interesting dynamical-systems question?
I'm aware this question might not be suitable for the MathOverflow forum, but aside from this site, I don't think there is a more suitable place to ask it.
| This is to prove the conjecture
\begin{equation*}
x_n\sim\sqrt3\,n^{1/2} \tag{1}\label{1}
\end{equation*}
(as $n\to\infty$).
(For all integers $n\ge1$,) we have
\begin{equation*}
h_n:=x_{n+1}-x_n=\frac n{s_n}, \tag{2}\label{2}
\end{equation*}
where
\begin{equation*}
s_n:=x_1+\cdots+x_n,
\end{equation*}
with $s_0:=0$.
Rewrite \eqref{2} as $s_{n+1}-2s_n+s_{n-1}=\dfrac n{s_n}$ and then as $s_{n+1}s_n-2s_n^2+s_{n-1}s_n=n$ and then as $s_{n+1}(s_{n+1}-x_{n+1})-2s_n^2+s_{n-1}(s_{n-1}+x_n)=n$ and then as $s_{n+1}^2-2s_n^2+s_{n-1}^2=n+s_{n+1}x_{n+1}-s_{n-1}x_n$. Note also that $s_{n+1}x_{n+1}-s_{n-1}x_n=s_n(x_{n+1}-x_n)+x_{n+1}^2+x_n^2=n+x_{n+1}^2+x_n^2$, by \eqref{2}. So,
\begin{equation*}
t_n:=s_{n+1}^2-2s_n^2+s_{n-1}^2=2n+x_{n+1}^2+x_n^2. \tag{3}\label{3}
\end{equation*}
It follows that
\begin{equation*}
t_n\ge2n. \tag{4}\label{4}
\end{equation*}
Suppose that
\begin{equation*}
t_n\gtrsim cn \tag{5}\label{5}
\end{equation*}
for some real $c>0$. As usual, for any two positive sequences $(a_n)$ and $(b_n)$, we write $a_n\lesssim b_n$ or, equivalently, $b_n\gtrsim a_n$ to mean $a_n\le(1+o(1))b_n$ -- so that $(a_n\lesssim b_n\ \&\ a_n\gtrsim b_n)\iff a_n\sim b_n\iff a_n=(1+o(1))b_n$.
By \eqref{3}, the $t_n$'s are the second (symmetric) differences of the $s_n^2$'s. So, by \eqref{5},
\begin{equation*}
s_n^2\gtrsim\frac c6\,n^3\quad\text{and hence}\quad s_n\gtrsim\sqrt{\frac c6}\,n^{3/2}. \tag{6}\label{6}
\end{equation*}
So, by \eqref{2},
\begin{equation*}
h_n\lesssim \sqrt{\frac6c}\,n^{-1/2} \quad\text{and hence}\quad
x_n\lesssim \sqrt{\frac6c}\,2n^{1/2}. \tag{7}\label{7}
\end{equation*}
So, by \eqref{3},
\begin{equation*}
t_n\lesssim 2n+2\frac6c\,4n=\Big(2+\frac{48}c\Big)n. \tag{8}\label{8}
\end{equation*}
So (cf. \eqref{6}),
\begin{equation*}
s_n^2\lesssim\Big(2+\frac{48}c\Big)\frac{n^3}6
=\Big(\frac13+\frac8c\Big)n^3
\quad\text{and hence}\quad
s_n\lesssim\sqrt{\frac13+\frac8c}\,n^{3/2}. \tag{9}\label{9}
\end{equation*}
So, by \eqref{2},
\begin{equation*}
h_n\gtrsim\frac1{\sqrt{\frac13+\frac8c}}\,n^{-1/2} \quad\text{and hence}\quad
x_n\gtrsim \frac2{\sqrt{\frac13+\frac8c}}\,n^{1/2}. \tag{10}\label{10}
\end{equation*}
So, by \eqref{3},
\begin{equation*}
t_n\gtrsim 2n+2\frac4{\frac13+\frac8c}n=f(c)n \tag{11}\label{11}
\end{equation*}
(whenever \eqref{5} holds), where
\begin{equation*}
f(c):=2+\frac8{\frac13+\frac8c}.
\end{equation*}
It follows from \eqref{4} that for all integers $k\ge0$
\begin{equation*}
t_n\gtrsim c_kn, \tag{12}\label{12}
\end{equation*}
where
\begin{equation*}
c_0:=2
\end{equation*}
and
\begin{equation*}
c_{k+1}:=f(c_k).
\end{equation*}
The function $f$ is continuously increasing on $[2,\infty)$. Also, $f(c)>c$ for $c\in[2,8)$ and $f(c)<c$ for $c\in(8,\infty)$. It follows that $c_k\uparrow 8$ as $k\to\infty$. So, by \eqref{12},
\begin{equation*}
t_n\gtrsim8n;
\end{equation*}
that is, \eqref{5} holds with $c=8$. So, by \eqref{7} and \eqref{10},
\begin{equation*}
x_n\lesssim \sqrt{\frac68}\,2n^{1/2}=\sqrt3\,n^{1/2}
\quad\text{and}\quad x_n\gtrsim \frac2{\sqrt{\frac13+\frac88}}\,n^{1/2}=\sqrt3\,n^{1/2}.
\end{equation*}
Thus, \eqref{1} is proved. $\quad\Box$
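For readers who want to see the limit emerge numerically, here is a small Python sketch (not part of the original argument); the starting value $x_1=1$ and the checkpoints are my own arbitrary choices, and the printed ratio should drift towards $\sqrt3\approx1.7320508$:
import math

x = 1.0      # x_1 (any positive starting value should give the same limit)
s = x        # s_n = x_1 + ... + x_n
n = 1
for target in (10**3, 10**4, 10**5, 10**6):
    while n < target:
        x += n / s       # x_{n+1} = x_n + n / s_n
        n += 1
        s += x           # update the partial sum s_{n+1}
    print(n, x / math.sqrt(n))   # should approach sqrt(3) = 1.7320508...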
| {
"language": "en",
"url": "https://mathoverflow.net/questions/426341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
} |
solution of the equation $a^2+pb^2-2c^2-2kcd+(p+k^2)d^2=0$ I am wondering if there is a complete solution of the equation $a^2+pb^2-2c^2-2kcd+(p+k^2)d^2=0$ in which $a,b,c,d,k$ are integers (not all zero) and $p$ is an odd prime.
| I'm not getting much 2-adic information for this one, but it should be easy enough to check all solutions mod 8 and mod 16 and see what happens.
To restrict anything, one property requires $p \equiv \pm 3 \pmod 8$ and the other requires $p \equiv 3 \pmod 4.$ Put them together, when $$p \equiv 3 \pmod 8 $$ and
$$ p | k, $$
then all four of your letters $$ a,b,c,d = 0.$$
The proof uses two flavors of anisotropy for binaries. The assumption is that at least one of
$ a,b,c,d $ is nonzero and $\gcd(a,b,c,d) = 1.$
First we have forced
$a^2 - 2 c^2 \equiv 0 \pmod p,$ so $a,c \equiv 0 \pmod p$ as $(2 | p) = -1.$ But then
$ p b^2 + p d^2 \equiv 0 \pmod {p^2},$ or
$ b^2 + d^2 \equiv 0 \pmod p,$ so $b,d \equiv 0 \pmod p$ as $(-1 | p) = -1.$ So
$ p | \gcd(a,b,c,d)$ contrary to assumption.
Otherwise, given a fixed $(p,k)$ once you have a nontrivial solution you get infinitely many
using automorphs of the indefinite part in variables $(c,d).$ That is, there may be many parametrized families of solutions of one type or another. But you can figure some of those out with a computer algebra system more easily than I can by hand.
The next interesting case is when $12 k^2 + 8p$ is a square, which means that the binary form
$T(c,d)=2c^2+2kcd-(p+k^2)d^2$ factors. So $3 k^2 + 2p$ is a square, which is not possible for even $k,$ so $k$ is odd and $2p \equiv 6 \pmod 8,$ or $p \equiv -1 \pmod 4.$ Unless $p=3$ we also need
$p \equiv -1 \pmod 3,$ or $p \equiv -1 \equiv 11 \pmod {12}.$
For example, with $p=11, k=1,3 k^2 + 2p = 25, p + k^2 = 12,$ we have
$$ a^2+11b^2-2c^2-2cd+12d^2 = a^2+11b^2-2(c-2d)(c+3d).$$
The value of the factorization is that we can take, for instance, $c = 2 d + 1, c + 3 d = 5 d + 1,$ and
$$ a^2+11b^2-2(5d+1) = 0.$$ Now $a^2 + 11 b^2$ is not even unless it is also
divisible by $4.$ We also need $ a^2 \equiv b^2 \equiv 1 \pmod 5.$ Put them together, we have a parametrized solution of sorts, with $$ a \equiv 1,4 \pmod 5, \; \; b \equiv 1,4 \pmod 5, \; \; a \equiv b \pmod 2$$ take $c = 2 d + 1$ and
$$ d = \frac{ a^2+11b^2-2}{10}.$$
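A brute-force Python sketch (my own, purely illustrative; the search bound is arbitrary) exhibits the two regimes discussed above: nontrivial solutions such as $(a,b,c,d)=(1,1,3,1)$ for $p=11$, $k=1$, and no nontrivial solutions for $p=3$, $k=3$, where $p\equiv3\pmod 8$ and $p\mid k$:
from itertools import product

def search(p, k, bound=6):
    # small integer solutions (a,b,c,d) != (0,0,0,0) of a^2 + p b^2 - 2c^2 - 2k c d + (p + k^2) d^2 = 0
    hits = []
    for a, b, c, d in product(range(-bound, bound + 1), repeat=4):
        if (a, b, c, d) != (0, 0, 0, 0) and a*a + p*b*b - 2*c*c - 2*k*c*d + (p + k*k)*d*d == 0:
            hits.append((a, b, c, d))
    return hits

print(search(11, 1))   # contains (1, 1, 3, 1) and its relatives
print(search(3, 3))    # expected empty: p = 3 is 3 mod 8 and p | k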
| {
"language": "en",
"url": "https://mathoverflow.net/questions/38354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Eigenvalues of principal minors Vs. eigenvalues of the matrix Say I have a positive semi-definite matrix with least positive eigenvalue x. Are there always principal minors of this matrix with eigenvalue less than x?
(Here "semidefinite" can not be taken to include the case "definite" -- there should be a
zero eigenvalue.)
| No, it is not true;
Consider the matrix
$$A=
\begin{pmatrix}
\frac{9}{2} & \frac{9}{20} & \frac{21}{20} & -\frac{3}{2}\\
-\frac{79}{11} & -\frac{3}{110} & \frac{23}{110} & \frac{31}{11}\\
\frac{6}{11} & \frac{21}{55} & \frac{114}{55} & \frac{6}{11}\\
\frac{16}{11} & \frac{12}{55} & \frac{128}{55} & \frac{5}{11}
\end{pmatrix}
$$
which has eigenvalues 0,1,2,3 (so it is positive semi-definite, but not definite.)
The four principal minors are
$$\frac{27}{22},\frac{189}{110},\frac{153}{55},\frac{36}{11}$$
sorted in increasing order. This should give a definite negative answer to your question.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/123563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
solutions to special diophantine equations Let $0\le x,y,z,u,v,w\le n$ be integer numbers obeying
\begin{align*}
x^2+y^2+z^2=&u^2+v^2+w^2\\
x+y+v=&u+w+z\\
x\neq& w
\end{align*}
(Please note that the second equality is $x+y+v=u+w+z$ NOT $x+y+z=u+v+w$. This has led to some mistakes in some of the answers below.)
How can the solutions to the above equations be characterized? One class of solutions is as follows
\begin{align*}
x=&u\\
y=&w\\
z=&v
\end{align*}
| This may not give all the solutions, but it does give a 3-parameter family. Choose positive integers $a,b,c,d$ such that $$a+b=c+d,\quad a+c<b<a+d,\quad2a<b$$ Then $$b^2+(b-a-c)^2+(a+d-b)^2=c^2+d^2+(b-2a)^2$$ and $$b+(b-2a)+(a+d-b)=c+d+(b-a-c)$$
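A quick computational check of this family (my own sketch; the parameter ranges are arbitrary): pick integers with $a+b=c+d$ and verify that the two displayed identities hold.
import random

for _ in range(1000):
    c, d = random.randint(1, 50), random.randint(1, 50)
    a = random.randint(1, c + d - 1)
    b = c + d - a                    # enforce a + b = c + d, with b >= 1
    assert b**2 + (b - a - c)**2 + (a + d - b)**2 == c**2 + d**2 + (b - 2*a)**2
    assert b + (b - 2*a) + (a + d - b) == c + d + (b - a - c)
print("both displayed identities hold for all sampled (a, b, c, d)")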
| {
"language": "en",
"url": "https://mathoverflow.net/questions/184202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Is there any simpler form of this function Assume that $n$ is a positive integer. Is there any simple form of this hypergeometric value $$_2\mathrm{F}_1\left[\frac{1}{2},1,\frac{3+n}{2},-1\right]?$$
| Write $F(n)$ for your formula. Then
$$
F \left( 0 \right) =\frac{1}{4}\,\pi \\
F \left( 2 \right) =-\frac{3}{2}+\frac{3}{4}\,\pi \\
F \left( 4 \right) =-5+{\frac {15}{8}}\,\pi \\
F \left( 6 \right) =-{\frac {77}{6}}+{\frac {35}{8}}\,\pi \\
F \left( 8 \right) =-30+{\frac {315}{32}}\,\pi \\
F \left( 10 \right) =-{\frac {671}{10}}+{\frac {693}{32}}\,\pi
$$
and
$$
F \left( 1 \right) =-2+2\,\sqrt {2}\\
F \left( 3 \right) =-{\frac {20}{3}}+\frac{16}{3}\,\sqrt {2}\\
F \left( 5 \right) =-{\frac {86}{5}}+{\frac {64}{5}}\,\sqrt {2}\\
F \left( 7 \right) =-{\frac {1416}{35}}+{\frac {1024}{35}}\,\sqrt {2}\\
F \left( 9 \right) =-{\frac {5734}{63}}+{\frac {4096}{63}}\,\sqrt {2}\\
F \left( 11 \right) =-{\frac {46124}{231}}+{\frac {32768}{231}}\,
\sqrt {2}
$$
Now, can you guess the patterns?
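The values above can be reproduced numerically with mpmath (a sketch of mine; only a few entries are checked):
from mpmath import mp, hyp2f1, pi, sqrt, mpf

mp.dps = 30
F = lambda n: hyp2f1(mpf(1)/2, 1, mpf(3 + n)/2, -1)
print(F(0) - pi/4)                               # ~ 0
print(F(1) - (-2 + 2*sqrt(2)))                   # ~ 0
print(F(2) - (mpf(-3)/2 + 3*pi/4))               # ~ 0
print(F(3) - (mpf(-20)/3 + mpf(16)/3*sqrt(2)))   # ~ 0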
| {
"language": "en",
"url": "https://mathoverflow.net/questions/200845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Infinite limit of ratio of nth degree polynomials The Problem
I have two recursively defined polynomials (skip to the bottom for background and motivation if you care about that) that represent the numerator and denominator of a factor and I want to find the limit of that factor as n goes to infinity.
$$n_0 = d_0 = 1$$
$$n_n = d_{n-1}x - n_{n-1}$$
$$d_n = d_{n-1}(x-1)-n_{n-1}$$
I was able to represent this recursive relationship as a matrix and use eigenvalue matrix decomposition to find a closed form for $d_n$ and $n_n$. They are not very pretty:
$$d_n = \frac{2^{-n-1}}{\sqrt{x-4}} \left(\left(\sqrt{x-4}+\sqrt{x}\right) \left(x+\sqrt{x-4} \sqrt{x}-2\right)^n+\left(\sqrt{x-4}-\sqrt{x}\right) \left(x-\sqrt{x-4} \sqrt{x}-2\right)^n+\frac{2 \left(\left(x-\sqrt{x-4} \sqrt{x}-2\right)^n-\left(x+\sqrt{x-4} \sqrt{x}-2\right)^n\right)}{\sqrt{x}}\right)$$
$$ n_n = 2^{-n-1} \left(\left(x+\sqrt{x-4} \sqrt{x}-2\right)^n+\left(x-\sqrt{x-4} \sqrt{x}-2\right)^n+\frac{\left(x+\sqrt{x-4} \sqrt{x}-2\right)^n-\left(x-\sqrt{x-4} \sqrt{x}-2\right)^n}{\sqrt{\frac{x-4}{x}}}\right)$$
The first few terms of the ratio ($r_n = n_n/d_n$) are:
$$r_1 = \frac{x-1}{x-2}$$
$$r_2 = \frac{x^2-3 x+1}{x^2-4 x+3}$$
$$r_3 = \frac{x^3-5 x^2+6 x-1}{x^3-6 x^2+10 x-4}$$
$$r_4 = \frac{x^4-7 x^3+15 x^2-10 x+1}{x^4-8 x^3+21 x^2-20 x+5}$$
And so on.
All of these equations seem to have poles located solely at $0\le x\le 4$; moreover, the denominator and numerator seem never to have imaginary roots, but I haven't proven that.
$x$ is assumed to be a positive real number, and because there are no poles for $x > 4$ the limit seems to be well behaved and equal to 1(?) outside of this range. But within this range, I am extremely curious about what the limit is, if it even exists or is possible to evaluate.
Can someone let me know if it's possible to evaluate this limit in this range? And if so can you let me know how to go about it?
Background and Motivation
An ideal transmission line can be modeled as an inductor and capacitor, the inductor is in series with the load and the capacitor is in parallel.
The impedance of an inductor is given by $Z_L = i \omega L$ and the impedance of a capacitor is $Z_C = \frac{1}{i \omega C}$.
If we string several transmission lines together and end in an open circuit then we can start evaluating from the end using the impedance addition laws. We first add $Z_L + Z_C$ and then to combine with the second to last capacitor we must $\frac{1}{\frac{1}{Z_L+Z_C}+\frac{1}{Z_C}}$.
When we do that we find, interestingly that the ratio between $Z_C$ and this value is our real valued ratio $r_1$. We can then solve for $r_n$ by breaking up the ratio into a numerator and denominator and starting from the (n-1)th iteration, and find the relationship given at the top of the question, with $x = \omega ^2 L C$.
Because a series of short transmission lines strung together is the same thing as a long transmission line, I expected the limit of this ratio to be equal to 1. However, when attempting to evaluate this limit it seems that it is not that simple.
| Here is an explicit formula for your ratio $r_n=\frac{n_n}{d_n}$:
$$r_n=
\frac{\sum_{k=0}^n\binom{n+k}{2k}(-x)^k}
{\sum_{k=0}^n\binom{n+k+1}{2k+1}(-x)^k}.$$
Let $P_n(x)$ and $Q_n(x)$ be the numerator and denominator polynomials of $r_n$, respectively. Then both polynomials share a common recurrence; namely,
$$P_{n+2}+(x-2)P_{n+1}+P_n=0 \qquad \text{and} \qquad Q_{n+2}+(x-2)Q_{n+1}+Q_n=0.\tag1$$
They differ only in the initial condition where $P_0=1, P_1=1-x$ while $Q_0=1, Q_1=2-x$. The importance of such a description is that (1) the original recursive relations are decoupled here; (2) it is more amenable to an asymptotic analysis; (3) it reveals the roots being in $[0,4]$ due to the interlacing property of three-term recurrences.
Note. The original numerator and denominator differ by $\pm$ sign from $P_n$ and $Q_n$, but this makes no difference for the ratio $r_n$.
In fact (Fedor!),
$$r_n=\frac{P_n(x)}{Q_n(x)}=\sqrt{x}\,\frac{U_{2n}(\sqrt{x}/2)}{U_{2n+1}(\sqrt{x}/2)}$$
where $U_n(y)$ are Chebyshev polynomials of the 2nd kind, expressible as
$$U_n(y)=\frac{(y+\sqrt{y^2-1})^n-(y-\sqrt{y^2-1})^n}{2\sqrt{y^2-1}}.$$
If $y\geq1$, or equivalently $z=y+\sqrt{y^2-1}\geq1$, then
$$\lim_{n\rightarrow\infty}\frac{U_n(y)}{U_{n+1}(y)}=
\lim_{n\rightarrow\infty}\frac{z^n-z^{-n}}{z^{n+1}-z^{-n-1}}=\frac1z=y-\sqrt{y^2-1}.$$
If $0<y<1$ then the complex modulus $\vert z\vert=1$ and hence
$\lim_{n\rightarrow\infty}\frac{U_n(y)}{U_{n+1}(y)}$ fails to exist.
If $y=0$ then apparently the limit is $0$.
When $y=\frac{\sqrt{x}}2$, the conditions become
$$\lim_{n\rightarrow\infty}r_n(x)=\frac{x-\sqrt{x^2-4x}}2$$
if $x\geq4$ or $x=0$. Otherwise (if $0<x<4$) this limit does not exist, because the ratio oscillates!
Finally, since the roots of Chebyshev polynomials $U_n(y)$ lie in $[-1,1]$ it follows that the roots of $U_n(\sqrt{x}/2)$ must be limited in the range $[0,4]$.
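For completeness, here is a short exact-arithmetic cross-check (my own; the test point $x=9/2$, chosen outside $[0,4]$ so no denominator vanishes, and the range of $n$ are arbitrary) that the original recursion for $n_n, d_n$ and the explicit binomial formula for $r_n$ agree:
from fractions import Fraction
from math import comb

x = Fraction(9, 2)
nn, dd = Fraction(1), Fraction(1)              # n_0 = d_0 = 1
for n in range(1, 11):
    nn, dd = dd * x - nn, dd * (x - 1) - nn    # the original recursion
    num = sum(comb(n + k, 2*k) * (-x)**k for k in range(n + 1))
    den = sum(comb(n + k + 1, 2*k + 1) * (-x)**k for k in range(n + 1))
    assert nn / dd == num / den
print("recursion and closed form for r_n agree for n = 1..10")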
| {
"language": "en",
"url": "https://mathoverflow.net/questions/249549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
What is the term for this type of matrix? Is there an established term for the following type of square matrices?
$\begin{pmatrix}
c & c & c & c & \cdots & c & c \\
c & a & b & b & \cdots & b & b \\
c & b & a & b & \cdots & b & b \\
c & b & b & a & & b & b \\
\vdots & \vdots & \vdots & & \ddots & & \vdots \\
c & b & b & b & & a & b \\
c & b & b & b & \cdots & b & a \\
\end{pmatrix}$
The matrix contains just 3 different items $a, b, c$:
*
*The first row is $c$.
*The first column is $c$.
*The diagonal is $a$, except for the upper left corner.
*The remaining items are $b$.
Background: $a, b, c$ can be chosen such that the matrix is orthogonal, but has a constant first row. If the dimension is a square (i.e. the matrix is a $r^2 \times r^2$ matrix) then it is possible to choose all entries to be integers - up to a common (usually irrational) factor in front of the matrix for normalization.
| When $b = 0$, we have an $n \times n$ symmetric arrowhead matrix. When $b \neq 0$, we have
$$\begin{bmatrix}
c & c & c & \cdots & c & c \\
c & a & b & \cdots & b & b \\
c & b & a & \cdots & b & b \\
c & b & b & \cdots & b & b \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
c & b & b & \cdots & a & b \\
c & b & b & \cdots & b & a \\
\end{bmatrix} = \begin{bmatrix}
c-b & c-b & c-b & \cdots & c-b & c-b \\
c-b & a-b & 0 & \cdots & 0 & 0 \\
c-b & 0 & a-b & \cdots & 0 & 0 \\
c-b & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
c-b & 0 & 0 & \cdots & a-b & 0 \\
c-b & 0 & 0 & \cdots & 0 & a-b \\
\end{bmatrix} + b \, 1_n 1_n^{\top}$$
which is the sum of a symmetric arrowhead matrix and a (nonzero) multiple of the all-ones matrix.
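A tiny numpy illustration of this decomposition (my own sketch; the size and the values of $a,b,c$ are arbitrary):
import numpy as np

n, a, b, c = 6, 5.0, 2.0, 3.0
M = np.full((n, n), b)
M[0, :] = M[:, 0] = c
M[np.arange(1, n), np.arange(1, n)] = a    # diagonal is a, except the upper-left corner
A = M - b * np.ones((n, n))                # should be a symmetric arrowhead matrix
interior = np.ones((n, n), dtype=bool)
interior[0, :] = interior[:, 0] = False
np.fill_diagonal(interior, False)
assert np.all(A[interior] == 0)            # all entries off the border and the diagonal vanish
print(A)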
| {
"language": "en",
"url": "https://mathoverflow.net/questions/262091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Series sum with coefficients that are Fibonacci numbers Let $m>n$ be positive integers. Consider the following sum:
\begin{equation}
S(m,n)=\sum_{k=0}^n F_{k+1} \frac{{m-1\choose{k}} {n-1\choose{k}}}{ {m+n-1\choose{2k+1}} {2k\choose{k}}},
\end{equation}
where $F_k$ denotes the $k$th Fibonacci number. I would like to understand how $S(m,n)$ varies as a function of $m$ and $n$, and possibly upper/lower bound $S(m,n)$ in terms of an explicit function of $m,n$ (as opposed to a series sum).
| First notice that
$$\frac{{m-1\choose{k}} {n-1\choose{k}}}{ {m+n-1\choose{2k+1}} {2k\choose{k}}} = \frac{(m-1)!(n-1)!(m+n-2-2k)!}{(2k+1)(n-1-k)!(m-1-k)!(m+n-1)!} = \frac{(2k+1)\binom{m+n-2-2k}{n-1-k}}{(m+n-1)\binom{m+n-2}{n-1}}.$$
Then $\binom{m+n-2-2k}{n-1-k}$ can be expressed as
$$\binom{m+n-2-2k}{n-1-k}=[x^{n-1-k}]\ \frac{1}{\sqrt{1-4x}}\left(\frac{1-\sqrt{1-4x}}{2x}\right)^{m-n},$$
where $[x^d]$ is the operator taking the coefficient of $x^d$.
Similarly we can express $(2k+1)F_{k+1}$ as
$$(2k+1)F_{k+1}=[x^{2k}]\ \frac{\partial}{\partial x}\frac{x}{1-x^2-x^4} = [x^{2k}]\ \frac{1+x^2+3x^4}{(1-x^2-x^4)^2} = [x^k]\ \frac{1+x+3x^2}{(1-x-x^2)^2}.$$
Hence,
$$S(m,n) = \frac{1}{(m+n-1)\binom{m+n-2}{n-1}}\cdot [x^{n-1}]\ \frac{1+x+3x^2}{(1-x-x^2)^2\sqrt{1-4x}}\left(\frac{1-\sqrt{1-4x}}{2x}\right)^{m-n}$$
$$=\frac{1}{(m+n-1)\binom{m+n-2}{n-1}}\cdot [y^{n-1}]\ \frac{1+y+2y^2-6y^3+3y^4}{(1-y+2y^3-y^4)^2(1-y)^m},$$
where the latter expression is obtained with Lagrange inversion. The asymptotic of the coefficients of this generating function can now be obtained with the standard methods.
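The first rewriting of the summand can be checked exactly for small parameters (a sketch of mine, with arbitrary ranges):
from fractions import Fraction
from math import comb

for m in range(2, 12):
    for n in range(1, m):            # m > n as in the question
        for k in range(n):           # k <= n-1, so every binomial below is meaningful
            lhs = Fraction(comb(m - 1, k) * comb(n - 1, k),
                           comb(m + n - 1, 2*k + 1) * comb(2*k, k))
            rhs = Fraction((2*k + 1) * comb(m + n - 2 - 2*k, n - 1 - k),
                           (m + n - 1) * comb(m + n - 2, n - 1))
            assert lhs == rhs
print("summand identity verified for all sampled (m, n, k)")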
| {
"language": "en",
"url": "https://mathoverflow.net/questions/279734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Quasi-concavity of $f(x)=\frac{1}{x+1} \int_0^x \log \left(1+\frac{1}{x+1+t} \right)~dt$ I want to prove that function
\begin{equation}
f(x)=\frac{1}{x+1} \int\limits_0^x \log \left(1+\frac{1}{x+1+t} \right)~dt
\end{equation}
is quasi-concave. One approach is to obtain the closed form of the integral (provided below) and then prove that the result is quasi-concave. I tried this but it seems to be difficult. Do you have any idea how to prove the quasi-concavity?
\begin{align}
f(x)&=\frac{1}{x+1} \int\limits_0^x \log \left(1+\frac{1}{x+1+t} \right)~dt\\
&= \frac{1}{x+1} \left[ \log(2x + 2) + (2x+1)\log \left(1+\frac{1}{2x+1} \right) -\log(x+2) - (x+1)\log \left( 1+\frac{1}{x+1} \right) \right]
\end{align}
| For $x>-1/2$, we have
$$
f(x)=\ln 4-\frac{x+2}{x+1}\, \ln \frac{x+2}{x+1}
-\frac{2 x+1}{x+1} \ln\frac{2 x+1}{x+1}
$$
and hence
$$
f'(x)=\frac1{(x+1)^2}\ln \left(1+\frac{1-x}{2 x+1}\right),
$$
which is $>0$ for $x\in(-1/2,1)$ and $<0$ for $x>1$. So, $f$ increases on $(-1/2,1]$ and decreases on $[1,\infty)$. So, $f$ is quasi-concave.
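A quick numerical confirmation (my own sketch; the sample points are arbitrary and scipy is used only for the quadrature): the closed form above matches the defining integral, and the printed values rise up to $x=1$ and fall afterwards, in line with the monotonicity just proved.
from math import log
from scipy.integrate import quad

def f_closed(x):
    return (log(4) - (x + 2)/(x + 1)*log((x + 2)/(x + 1))
            - (2*x + 1)/(x + 1)*log((2*x + 1)/(x + 1)))

def f_integral(x):
    val, _ = quad(lambda t: log(1 + 1/(x + 1 + t)), 0, x)
    return val / (x + 1)

for x in (0.25, 0.5, 1.0, 2.0, 5.0, 20.0):
    print(x, f_integral(x), f_closed(x))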
| {
"language": "en",
"url": "https://mathoverflow.net/questions/303704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
order of a permutation and lexicographic order Let $M$ be an $n\times m$ matrix, say with entries in $\left\{0,1\right\}$; and let $\mathcal C(M)$ be the $n\times m$ matrix such that there exists an $m\times m$ permutation matrix $P$ with $M.P=\mathcal C(M)$ and such that the columns of $\mathcal C(M)$ are lexicographically increasing (1) (for a formal definition of (1) see reflexive relations that are "tridiagonally cycle-indexed" (or "almost ordered" matrices/relations)).
$\mathcal R(M):= (\mathcal C(M^t))^t$ is the matrix you get from $M$ whose rows are lexicographically increasing.
We now set $\mathcal L=\mathcal C\circ\mathcal R$.
Let $Q$ be a $m\times m$ permutation matrix s.t. $Q^q=Id$. We define $\mathcal L_Q$ to be such that $\mathcal L_Q(M)=\mathcal L(M).Q$ for all $M$ of size $n\times m$.
Does there exist $r\in \mathbb N$ such that $\mathcal L_Q^{r+iq}(M)=\mathcal L_Q^r(M)$ for all $i\in \mathbb N$?
The cases that seem the most interesting to me are $\mathcal L_{Id}=\mathcal L$ and $\mathcal L_J$, where $J$ is the $i\mapsto m-i$ permutation matrix; I talked about these cases in the link above.
Example with $m=n=4$
$Q=J=\begin{matrix}
0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 1& 0 & 0 & 0
\end{matrix}$
(so $q=2$)
Let's take
$M=\begin{matrix}
0 & 0 & 1 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \\ 1& 1 & 1 & 0
\end{matrix}$
We sort the rows according to lexicographic order:
$\mathcal R(M)=\begin{matrix}
0 & 0 & 1 & 1\\ 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 1 & 1 & 0
\end{matrix}$
And now columns...
$\mathcal L(M)=\begin{matrix} 0 & 0 & 1 & 1\\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 0 \\ 1 & 1 & 0 & 1
\end{matrix}$
And we multiply on the right by $J$:
$\mathcal L_J(M)= \begin{matrix} 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 \\ 0 & 1 & 0 & 1 \\ 1 & 0 & 1 & 1
\end{matrix}$
If we apply $\mathcal L_J$ to $\mathcal L_J(M)$ we then get :
$\mathcal L^2_J(M)= \begin{matrix} 1 & 1 & 0 & 0\\ 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 \\ 0 & 1 & 1 & 0
\end{matrix}$
etc....
We will get :
$\mathcal L^4_J(M)=\mathcal L^6_J(M)=\begin{matrix} 1 & 1 & 0 & 0\\ 1 & 0 & 1 & 0 \\ 0 & 0 & 1 & 1 \\ 0 & 1 & 1 & 1
\end{matrix}$
One can verify that $\mathcal L^3_J(M)\ne \mathcal L^5_J(M)$ (and $\ne \mathcal L^4_J(M)$), so $r=4$ is the smallest possible (and the map $i\mapsto \mathcal L^{r+i}_J(M)$ is $2$-periodic but not constant); a small computational sketch follows.
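Here is a short Python sketch (mine, not part of the question) implementing $\mathcal R$, $\mathcal C$ and $\mathcal L_J$ on 0/1 matrices; it reproduces the $4\times4$ example above and then prints whether the claimed eventual $2$-periodicity holds:
def R(M):                                   # sort the rows lexicographically
    return [list(r) for r in sorted(map(tuple, M))]

def C(M):                                   # sort the columns lexicographically
    return [list(r) for r in zip(*sorted(zip(*M)))]

def LJ(M):                                  # L = C o R, then multiply by J on the right
    return [row[::-1] for row in C(R(M))]   # (right-multiplying by J reverses the columns)

M = [[0,0,1,1],
     [1,0,0,1],
     [0,1,1,0],
     [1,1,1,0]]
assert LJ(M) == [[1,1,0,0],[1,0,1,0],[0,1,0,1],[1,0,1,1]]   # matches the hand computation above

it = [M]
for _ in range(8):
    it.append(LJ(it[-1]))
print(it[4] == it[6], it[3] == it[5])       # expected, per the claims above: True False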
| There are counterexamples for $Q=J$.
Here is a $6\times7$ binary matrix $M$ that belongs to an orbit of $\mathcal{L}_J$ with period $3$:
$\begin{matrix}
1&1&1&0&0&0&0\\
1&1&0&1&1&0&0\\
0&0&1&1&0&1&0\\
0&0&1&0&1&0&1\\
1&0&0&1&0&1&1\\
0&1&0&0&1&1&1\\
\end{matrix}$
Here is another with period $4$:
$\begin{matrix}
1&1&0&0&0&0&0\\
1&0&1&1&0&0&0\\
0&1&1&0&1&0&0\\
0&1&0&1&0&1&1\\
1&0&0&0&1&1&0\\
0&0&0&1&1&1&0\\
\end{matrix}$
and another with period $6$:
$\begin{matrix}
1&1&1&0&0&0&0\\
1&1&0&1&1&0&0\\
0&0&1&1&0&1&0\\
0&0&1&1&0&0&1\\
1&0&0&0&0&1&1\\
0&1&0&0&0&1&1\\
\end{matrix}$
On the whole set of $7\times 7$ binary matrices, $\mathcal{L}_J$ has
$$
\begin{array}{r l}
326\,166&\text{fixed matrices}\\
86\,146\,036&\text{distinct orbits of length }2\\
94&\text{distinct orbits of length }3\\
5\,400&\text{distinct orbits of length }4\\
8&\text{distinct orbits of length }5\\
196&\text{distinct orbits of length }6\\
\end{array}
$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/306572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Product of arbitrary Mersenne numbers Let $p$ and $q$ be two arbitrary Mersenne numbers.
Is there a simple proof that $p\cdot q-1$ can never be a square?
$p\cdot q-1$ can instead be a 3rd power for:
$p=3,q=3$
$p=7,q=31$
$p=63,q=127$
In these cases it is interesting to see that $p\cdot q+1$ is an even semi-prime, as in the case $3\cdot 3 +1=10$ or $7\cdot 31+1=218$ or $63\cdot 127 +1=4001\cdot 2$
| Write $p=2^m-1$ and $q=2^n-1$ where $m>n$ without loss of generality. If $pq-1=(2^m-2^{m-n}-1)2^n$ is a square, then so is $2^m-2^{m-n}-1$. But then $m=n+1$, as $m\ge n+2$ implies $2^m-2^{m-n}-1\equiv 3\pmod 4$. Now, $2^m-2^{m-n}-1=2^m-3$, which clearly is not a square if $m$ is even, and which is not a square if $m$ is odd either as in this case $2^m-3\equiv (-1)^m\equiv -1\pmod 3$.
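A small exhaustive check of both claims over Mersenne numbers $2^m-1$ with $2\le m<40$ (my own sketch; the bound is arbitrary):
from math import isqrt

mers = [2**m - 1 for m in range(2, 40)]
is_square = lambda n: isqrt(n)**2 == n
def is_cube(n):
    r = round(n ** (1/3))
    return any((r + e)**3 == n for e in (-1, 0, 1))

print([(p, q) for p in mers for q in mers if is_square(p*q - 1)])            # empty, as proved above
print([(p, q) for p in mers for q in mers if p <= q and is_cube(p*q - 1)])   # includes (3,3), (7,31), (63,127)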
| {
"language": "en",
"url": "https://mathoverflow.net/questions/330693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Approximating $1/x$ by a polynomial on $[0,1]$ For every $\varepsilon > 0$, is there a polynomial of $x^4$ without constant term, i.e., $p(x^4) = a_1 x^4 + a_2 x^8 + \cdots +a_n x^{4n}$, such that
$$\|p(x^4)x^2 - x\| < \varepsilon $$
for every $x \in [0,1]$?
| Of course there is. Let $P$ approximate on $[0, 1]$ with error no greater than $\varepsilon$ the function $$f(x) = \min\{\varepsilon^{-5}, x^{-5/4}\} ,$$
and define $p(x) = x P(x)$. If $x \geqslant \varepsilon^4$, then $f(x^4) = x^{-5}$ and hence
$$|p(x^4) x^2 - x| = x^6 |P(x^4) - f(x^4)| \leqslant x^6 \varepsilon \leqslant \varepsilon .$$
On the other hand, if $x < \varepsilon^4$, we simply have $f(x^4) = \varepsilon^{-5}$, and hence
$$\begin{aligned}|p(x^4)x^2 - x| & \leqslant x^6 |P(x^4) - f(x^4)| + x^6 |f(x^4)| + |x| \\ & \leqslant \varepsilon^{24} \varepsilon + \varepsilon^{24} \varepsilon^{-5} + \varepsilon^4 \leqslant \varepsilon ,\end{aligned}$$
provided that $\varepsilon$ is small enough.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/333657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to prove this high-degree inequality Let $x$,$y$,$z$ be positive real numbers which satisfy $xyz=1$. Prove that:
$(x^{10}+y^{10}+z^{10})^2 \geq 3(x^{13}+y^{13}+z^{13})$.
And there is a similar question: Let $x$,$y$,$z$ be positive real numbers which satisfy the inequality
$(2x^4+3y^4)(2y^4+3z^4)(2z^4+3x^4) \leq(3x+2y)(3y+2z)(3z+2x)$. Prove this inequality:
$xyz\leq 1$.
| These inequalities are algebraic and thus can be proved purely algorithmically.
Mathematica takes a minute or two for this proof of your first inequality:
Here is a "more human" proof:
Substituting $z=\frac1{xy}$, rewrite your first inequality as
\begin{equation}
f(x,y)\mathrel{:=}\left(\frac{1}{x^{10} y^{10}}+x^{10}+y^{10}\right)^2-3 \left(\frac{1}{x^{13} y^{13}}+x^{13}+y^{13}\right)\ge0
\end{equation}
and then as
\begin{align}
g(x,y)&\mathrel{:=}f(x,y)x^{20} y^{20} \\
&=x^{40} y^{20}-3 x^{33} y^{20}+2 x^{30} y^{30}+x^{20} y^{40}-3 x^{20} y^{33} \\
&+2 x^{20} y^{10}+2 x^{10}
y^{20}-3 x^7 y^7+1\ge0,
\end{align}
for $x,y>0$.
Further,
\begin{align}
g_1(x,y)\mathrel{:=}{}\frac{g'_x(x,y)}{x^6 y^7}&=40 x^{33} y^{13}-99 x^{26} y^{13}+60 x^{23} y^{23}+20 x^{13} y^{33} \\
&-60 x^{13} y^{26}+40 x^{13} y^3+20 x^3
y^{13}-21, \\
g_2(x,y)\mathrel{:=}\frac{g'_y(x,y)}{x^7 y^6}&=20 x^{33} y^{13}-60 x^{26} y^{13}+60 x^{23} y^{23}+40 x^{13} y^{33} \\
&-99 x^{13} y^{26}+20 x^{13} y^3+40 x^3
y^{13}-21,
\end{align}
and the only positive roots of the resultants of $g_1(x,y)$ and $g_2(x,y)$ with respect to $x$ and $y$ are $y=1$ and $x=1$, respectively. So, $(1,1)$ is the only critical point of $g$.
Next, all the coefficients of the polynomial $g(1+u,1+v)$ in $u,v$ are nonnegative. Therefore and because of the symmetry $x\leftrightarrow y$, it remains to consider the cases (i) $0\le x\le1$ and $y>0$ is large enough and (ii) $x=0$.
For case (i), we have $g(x,y)\ge1 - 3 x^7 y^7 + 2 x^{10} y^{20}>0$. For case (ii), we have $g(0,y)=1>0$.
So, your first inequality is proved, again.
It took Mathematica about 1.8 hours to prove your second inequality:
The latter proof would probably take many thousands of pages.
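For what it's worth, a quick Monte-Carlo sanity check of the first inequality (my own sketch; the sampling distribution is an arbitrary choice):
import random

worst = float("inf")
for _ in range(10**5):
    x, y = random.lognormvariate(0, 1), random.lognormvariate(0, 1)
    z = 1 / (x * y)                                   # enforce xyz = 1
    diff = (x**10 + y**10 + z**10)**2 - 3*(x**13 + y**13 + z**13)
    worst = min(worst, diff)
print("smallest LHS - RHS seen:", worst)              # stays >= 0; equality is approached at x = y = z = 1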
| {
"language": "en",
"url": "https://mathoverflow.net/questions/385942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
} |
EM-wave equation in matter from Lagrangian Note
I am not sure if this post is of relevance for this platform, but I already asked the question in Physics Stack Exchange and in Mathematics Stack Exchange without success.
Setup
Let's suppose a homogeneous dielectric with a (spatially) local dielectric response function $\underline{\underline{\epsilon}} (\omega)$ (in general a tensor), such that we have the linear response relation $$\mathbf{D}(\mathbf{x}, \omega) = \underline{\underline{\epsilon}} \left(\omega \right) \mathbf{E}(\mathbf{x}, \omega) \, ,$$ for the displacement field $\mathbf{D}$ in the dielectric.
We can now write down a Lagrangian in Fourier space, describing the EM-field coupling to the dielectric body
$$\mathcal{L}=\frac{1}{2}\left[\mathbf{E}^{*}\left(x,\omega\right) \cdot (\underline{\underline{\epsilon}} \left(\omega \right)-1) \mathbf{E}\left(x,\omega\right)+|\mathbf{E}|^{2}\left(x,\omega\right)-|\mathbf{B}|^{2}\left(x,\omega\right) \right] \, .$$
If we choose a gauge
\begin{align}
\mathbf{E} &= \frac{i \omega}{c} \mathbf{A} \\
\mathbf{B} &= \nabla \times \mathbf{A} \, ,
\end{align}
such that we can write the Lagrangian (suppressing arguments) in terms of the vector potential $\mathbf{A}$ as
$$\mathcal{L} =\frac{1}{2}\left[\frac{\omega^2}{c^2} \mathbf{A}^{*} \cdot (\underline{\underline{\epsilon}}- \mathbb{1}) \mathbf{A}+ \frac{\omega^2}{c^2} |\mathbf{A}|^{2}-|\nabla \times \mathbf{A}|^{2}\right] \, . $$
And consequently we have the physical action
$$
S[\mathbf{A}] = \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} \left(\mathbf{A}\right) \, .
$$
Goal
My goal is to derive the EM-wave equation for the electric field in the dielectric media.
Idea
So my ansatz is the following: If we use Hamilton's principle, we want the first variation of the action to be zero
\begin{align}
0 = \delta S[\mathbf{A}] &= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} S[\mathbf{A} + \varepsilon \mathbf{h}] \right|_{\varepsilon=0} \\
&= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} (\mathbf{A} + \varepsilon \mathbf{h}) \right|_{\varepsilon=0} \\ &= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \mathbf{A}^* \cdot ({\underline{\underline{\epsilon}}}-\mathbb{1}) \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot ({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot \mathbf{A} \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \mathbf{A}^* ) \cdot ( \nabla \times \mathbf{h}) - (\nabla \times \mathbf{h}^* ) \cdot ( \nabla \times \mathbf{A}) \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] \cdot \mathbf{h}^* + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{A} \cdot \mathbf{h}^* \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \nabla \times \mathbf{A}^* ) \cdot \mathbf{h} - (\nabla \times \nabla \times \mathbf{A} ) \cdot \mathbf{h}^* \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \underbrace{\left[ \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \underbrace{\left[ \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h}^* \Bigg) \, ,
\end{align}
for all $\mathbf{h}(\mathbf{x}, \omega)$. And consequently we get the equations
\begin{align}
\frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* &= 0 \\
\frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} &= 0 \, .
\end{align}
If we suppose a lossy dielectric body, such that $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$, the equations are in contradiction.
Time-domain
An analogous derivation in the time-domain (which I can post here on request) yields the wave equation (in Fourier space)
$$\frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}-1) \mathbf{A} + \frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}^{\dagger} -1) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A} - \left( \nabla \times \nabla \times \mathbf{A} \right) = 0 \, .$$
This result is also not resembling the expected result, for $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$.
Question
What went wrong in the calculation?
A simple observation proves that the Lagrangian does not represent the reality of the physical phenomena: we can divide the Lagrangian into two parts, the part that contains the free field (without the presence of charges) and the part that contains the interaction between the EM field and matter:
$ \mathcal{L}=\left[\textbf{E*}(\textbf{r},\omega).(\underline{\underline{\epsilon}}-1)\textbf{E}(\textbf{r},\omega)+|\textbf{E}|^{2}(\textbf{r},\omega)-|\textbf{B}|^{2}(\textbf{r},\omega)\right]$
$\mathcal{L}=\mathcal{L}_{field}+\mathcal{L}_{interaction}$
where :
$\begin{cases}\mathcal{L}_{f}=|\textbf{E}|^{2}(\textbf{r},\omega)-|\textbf{B}|^{2}(\textbf{r},\omega)\\ \mathcal{L}_{i}=\textbf{E*}(\textbf{r},\omega).(\underline{\underline{\epsilon}}-1).\textbf{E}(\textbf{r},\omega)\end{cases}$
it is sufficient that $\underline{\underline{\epsilon}}=1$ for there to be no interaction ($\mathcal{L}_{i}=0$).
This would mean that vacuum is the only medium that does not absorb or disperse EM waves at any frequency, which is physically false: there are transparent media, such as glass, that let light pass in a certain frequency band.
I suggest replacing the term $(\underline{\underline{\epsilon}}-1)$ by $(\underline{\underline{\epsilon}}- \underline{\underline{\epsilon}}^{\dagger})$ in the interaction Lagrangian, to at least cover the case of transparent media; i.e. in the case where $\underline{\underline{\epsilon}}= \underline{\underline{\epsilon}}^{\dagger}$, we obtain a symmetric (diagonalizable) tensor.
I do not claim that the suggested Lagrangian is the right mold in which to pour physical optics, but it has the merit of containing more information.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/404380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Small Galois group solution to Fermat quintic I have been looking into the Fermat quintic equation $a^5+b^5+c^5+d^5=0$. To exclude the trivial cases (e.g. $c=-a,d=-b$), I will take $a+b+c+d$ to be nonzero for the rest of the question. It can be shown that if $a+b+c+d=0$, then the only solutions in the rational numbers (or even the real numbers) are trivial.
I had the idea of representing the values $a, b, c, d$ as roots of a quartic polynomial, and then trying to force the Galois group to have low order. As the Fermat quintic is symmetric, it yields an equation in the coefficients of the quartic, which are satisfied by the quartic equations: $5x^4-5x^3+5qx^2-5rx+5(q-1)(q-r)+1=0$, so clearly the most general case of Galois group $S_4$ is attainable. One could also fix one or two of the roots, obtaining a smaller Galois group.
However, what I have yet to discover, is a quartic in the family above which has a square discriminant, forcing the Galois group to be a subgroup of the alternating group $A_4$. Is there an example of such a quartic? Is there one with Galois group a proper subgroup of $A_4$? Better yet, are there infinite families of such quartics?
| As I understand the question, the OP wishes to solve,
$$x_1^k + x_2^k = x_3^k + x_4^k\tag1$$
for $k = 5$ and where the $x_i$ are roots of quartics. If we allow that the $x_i$ are roots of different quartics, then there are in fact solutions for $k = 5,6,8$.
I. k = 5
The first identity below just uses quadratics,
$$(a\sqrt2+b)^5+(-b+c\sqrt{-2})^5 = (a\sqrt2-b)^5+(b+c\sqrt{-2})^5$$
with Pythagorean triples $(a,b,c)$ and was known by Desboves. The second is by yours truly,
$$(\sqrt{p}+\sqrt{q})^5+(\sqrt{p}-\sqrt{q})^5 = (\sqrt{r}+\sqrt{s})^5 + (\sqrt{r}-\sqrt{s})^5$$
where,
\begin{align}
p &= 5vw^2,\quad q = -1+uw^2\\
r &= 5v,\quad\quad s = -(u+10v)+w^3\end{align}
and $w = u^2+10uv+5v^2$.
II. k = 6
We use the same form,
$$(\sqrt{p}+\sqrt{q})^6+(\sqrt{p}-\sqrt{q})^6 = (\sqrt{r}+\sqrt{s})^6 + (\sqrt{r}-\sqrt{s})^6$$
where,
\begin{align}
p &= -(a^2+14ab+b^2)^2+(ac+bc+13ad+bd)(c^2+14cd+d^2)\\
q &= \;\;(a^2+14ab+b^2)^2-(ac+13bc+ad+bd)(c^2+14cd+d^2)\\
r &= \;\;(c^2+14cd+d^2)^2-(ac+13bc+ad+bd)(a^2+14ab+b^2)\\
s &= -(c^2+14cd+d^2)^2+(ac+bc+13ad+bd)(a^2+14ab+b^2)
\end{align}
III. k = 8
Still using the same form,
$$(\sqrt{p}+\sqrt{q})^8+(\sqrt{p}-\sqrt{q})^8 = (\sqrt{r}+\sqrt{s})^8 + (\sqrt{r}-\sqrt{s})^8$$
where,
\begin{align}
p &= n^3-2n+1\\
q &= n^3+2n-1\\
r &= n^3-n-1\\
s &= n^3-n+1\end{align}
P.S. Unfortunately, $k= 7$ and $k=9$ do not seem to be amenable to the same approach.
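A floating-point check of the second $k=5$ identity (my own sketch; only small positive $(u,v)$ are sampled, where all of $p,q,r,s$ are positive so real square roots suffice):
from math import sqrt, isclose

side = lambda p, q: (sqrt(p) + sqrt(q))**5 + (sqrt(p) - sqrt(q))**5

for u in range(1, 6):
    for v in range(1, 6):
        w = u*u + 10*u*v + 5*v*v
        p, q = 5*v*w*w, -1 + u*w*w
        r, s = 5*v, -(u + 10*v) + w**3
        assert isclose(side(p, q), side(r, s), rel_tol=1e-9)
print("k = 5 identity holds numerically on the sampled (u, v)")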
| {
"language": "en",
"url": "https://mathoverflow.net/questions/430627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Coefficients of number of the same terms which are arising from iterations based on binary expansion of $n$ Let
$$\ell(n)=\left\lfloor\log_2 n\right\rfloor$$
Let
$$T(n,k)=\left\lfloor\frac{n}{2^k}\right\rfloor\operatorname{mod}2$$
Here $T(n,k)$ is the $(k+1)$-th bit from the right side in the binary expansion of $n$.
Let $a(n)$ be the sequence of positive integers such that we start from $A:=0$ and then for $k=0..\ell(n)$ we iterate:
*
*If $T(n,k)=1$, then $A:=\left\lfloor\frac{A}{2}\right\rfloor$;
otherwise $A:=A+1$;
*$A:=A+1$.
Then $a(n)$ is the resulting value of $A$.
For example for $n=18=10010_2$ we have:
*
*$A:=0$;
*$T(n,0)=0$, $A:=A+1=1$, $A:=A+1=2$;
*$T(n,1)=1$, $A:=\left\lfloor\frac{A}{2}\right\rfloor=1$, $A:=A+1=2$;
*$T(n,2)=0$, $A:=A+1=3$, $A:=A+1=4$;
*$T(n,3)=0$, $A:=A+1=5$, $A:=A+1=6$;
*$T(n,4)=1$, $A:=\left\lfloor\frac{A}{2}\right\rfloor=3$, $A:=A+1=4$.
Then $a(18)=4$.
Let
$$R(n,k)=\sum\limits_{j=2^{n-1}}^{2^n-1}[a(j)=k]$$
I conjecture that
*
*$R(n,k)=0$ if $n<1$ or $k>n$;
*$R(n,k)=1$ if $k=1$ or $k=n$;
*$R(n,k)=R(n-1,k-1)+R(n-1,2(k-1))+R(n-1,2k-1)$ otherwise.
To verify this conjecture one may use this PARI/GP program:
a(n) = my(A=0); for(i=0, logint(n, 2), if(bittest(n, i), A\=2, A++); A++); A \\ the iteration defining a(n)
R1(n) = my(v); v=vector(n, i, sum(k=2^(n-1), 2^n-1, a(k)==i)) \\ R(n,i) computed directly from the definition
R(n, k) = if(k==1, 1, if(k<=n, R(n-1, k-1) + R(n-1, 2*(k-1)) + R(n-1, 2*k-1))) \\ conjectured recurrence
R2(n) = my(v); v=vector(n, i, R(n,i)) \\ R(n,i) computed from the recurrence
test(n) = R1(n)==R2(n) \\ returns 1 if both agree for a given n
Is there a way to prove it? Is there a suitable closed form for $R(n,k)$?
| In other words, if $(b_\ell b_{\ell-1}\dots b_0)_2$ is the binary representation of $n$, then
$$a(n) = g(g(\dots g(g(0,b_0),b_1)\dots ),b_{\ell-1}), b_\ell),$$
where
$$g(A,b) = \begin{cases} A+2, &\text{if } b=0;\\
\left\lfloor \frac{A+2}2\right\rfloor, &\text{if } b=1.
\end{cases}$$
Consider a number triangle obtained from $A=0$ by iteratively applying $g(\cdot,0)$ and $g(\cdot,1)$:
$$
\begin{gathered}
0 \\
1 \ \ \ \ \ \ \ \ 2 \\
1 \ \ \ \ 3 \ \ \ \ 2 \ \ \ \ 4 \\
1\ 3\ 2\ 5\ 2\ 4\ 3\ 6 \\
\dots
\end{gathered}
$$
Let $f(n,k)$ be the multiplicity of $k$ at the level $n\in\{0,1,2\dots\}$ in this triangle.
It is easy to see that each number $k\geq 1$ in this triangle may result only from the following numbers in the previous row: $2k-2$, $2k-1$, or $k-2$, implying that $f$ satisfies the recurrence formula:
$$f(n,k) = \begin{cases}
\delta_{k,0}, & \text{if }n=0; \\
f(n-1,2k-2) + f(n-1,2k-1) + f(n-1,k-2), & \text{if }n>0.
\end{cases}$$
The quantity $R(n,k)$ accounts for numbers $k$ in the $n$th row, but only for those that resulted from $g(\cdot,1)$, that is
$$R(n,k) = f(n-1,2k-2) + f(n-1,2k-1).$$
Expanding this formula using the recurrence for $f$, we get
\begin{split}
R(n,k) &= f(n-2,4k-5) + f(n-2,4k-6) + f(n-2,2k-4) \\
&\quad + f(n-2,4k-3) + f(n-2,4k-4) + f(n-2,2k-3) \\
&= R(n-1,2k-2) + R(n-1,2k-1) + R(n-1,k-1).
\end{split}
QED
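A direct Python cross-check of the key relation $R(n,k)=f(n-1,2k-2)+f(n-1,2k-1)$ for small $n$ (my own sketch, mirroring the PARI program in the question):
from collections import Counter

def a(m):                                   # same iteration as in the question
    A = 0
    while m:
        A = A // 2 if (m & 1) else A + 1
        A += 1
        m >>= 1
    return A

def f_row(n):                               # multiplicities f(n, .) in the number triangle
    row = Counter({0: 1})
    for _ in range(n):
        nxt = Counter()
        for A, c in row.items():
            nxt[A + 2] += c                 # bit b = 0
            nxt[(A + 2) // 2] += c          # bit b = 1
        row = nxt
    return row

for n in range(1, 12):
    f = f_row(n - 1)
    direct = Counter(a(j) for j in range(2**(n - 1), 2**n))
    assert all(direct[k] == f[2*k - 2] + f[2*k - 1] for k in range(1, n + 1))
print("R(n,k) = f(n-1,2k-2) + f(n-1,2k-1) checked for n = 1..11")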
| {
"language": "en",
"url": "https://mathoverflow.net/questions/442078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unit fraction, equally spaced denominators not integer I've been looking at unit fractions, and found a paper by Erdős "Some properties of partial sums of the harmonic series" that proves a few things, and gives a reference for the following theorem:
$$\sum_{k=0}^n \frac{1}{m+kd}$$ is not an integer.
The source is:
Cf. T. Nagell, Eine Eigenschaft gewissen Summen, Skrifter Oslo, no. 13 (1923) pp. 10-15.
Question
Although I would like to find this source, I've checked with my university library and it seems pretty out of reach. What I'm really hoping for is a source that's more recent or even written in English.
Finding this specific source isn't everything, I'll be fine with pointers to places with similar results.
| I tracked down Nagell's paper in early 1992, since I had found a proof and wanted to see whether it was the same as his. (It turned out to be essentially the same idea.) Unfortunately, I've since lost my photocopy of his paper, but it came from UIUC, so that would be where I'd start looking. If I remember right, the journal where it appeared is incredibly obscure, and UIUC was the only place in North America that had a copy.
Here's a proof copied from my very old TeX file. A bit awkwardly written, but it explains how to do the case analysis, which is the part that makes this approach simpler than what Erdős did.
Suppose that $a$, $b$, and $n$ are positive integers and
$$\frac{1}{a}+\frac{1}{a+b}+\frac{1}{a+2b}+\cdots+\frac{1}{a+nb} = c \in
\mathbb{Z}$$ We can take $\gcd(a,b) = 1$ and $a > 1$, and it is easy to
show that $n > 2$.
Suppose that $b$ is odd. In the arithmetic progression, there is then a
unique number $a + mb$ divisible by the highest possible power of $2$,
because for all $k$, the progression runs through a cycle modulo $2^k$ which
contains each value exactly once. Then multiplication by $\ell =
\hbox{lcm}(a,a+b,\ldots,a+nb)$ gives
$$\frac{\ell}{a}+\frac{\ell}{a+b}+\frac{\ell}{a+2b}+\cdots+\frac{\ell}{a+nb}
= \ell c.$$ All of the terms here except $\frac{\ell}{a+mb}$ are even.
Thus, $b$ cannot be odd.
Now suppose that $b$ is even, and $b \le \frac{n-2}{3}$. By Bertrand's
Postulate, there is a prime $p$ such that $\frac{n+1}{2} < p < n+1$. Then
$p$ does not divide $b$, and $p$ is odd. We must have $a \le n$, because
$$1 \le c = \frac{1}{a}+\frac{1}{a+b}+\cdots+\frac{1}{a+nb} < \frac{n+1}{a}.$$
Since $b$ generates the additive group modulo $p$ and $n+1 > p$, at least
one of the numbers $a, a+b, \dots, a+nb$ is divisible by $p$. At most
two are, since $2p > n+1$. Suppose that $p$ divides only the term $a+kb$.
Then
$$\frac{1}{a+kb} = c-\sum_{j \neq k}{\frac{1}{a+jb}}.$$
The denominator of the left side is divisible by $p$, but that is not true
of the right side. Thus, $p$ must divide two terms.
Now suppose that $p$ divides $a+{\ell}b$ and $a+kb$, with $mp = a+{\ell}b <
a+kb = (m+b)p$. Then
$$\frac{1}{p}\left(\frac{2m+b}{m(m+b)}\right) =
c - \sum_{j \neq \ell,k}{\frac{1}{a+jb}}.$$
This implies that $p \mid (2m+b)$. However,
$$a+{\ell}b \le n + (n-p)\left(\frac{n-2}{3}\right) < n + \frac{n}{2}\left(\frac{n-2}{3}\right).$$
Therefore,
$$m < \frac{a+{\ell}b}{n/2} < \frac{n-2}{3} + 2.$$
It follows that $2m+b \le n+1$. However, $n+1 < 2p$, so $2m+b = p$. This
contradicts the fact that $b$ is even and $p$ is odd.
Finally, suppose that $b$ is even, and $b \ge \frac{n-1}{3}$. Since $a > 1$
and $a$ is odd, we must have $a \ge 3$. We must also have $b \ge 4$, since
if $b=2$, then Bertrand's Postulate and the fact that $n \ge a$ (as above)
imply that one of the terms is a prime, which does not divide any other
term.
Now, we show that $n \le 13$. To do that, note that
$$\frac{1}{a}+\frac{1}{a+b}+\cdots+\frac{1}{a+nb} < \frac{1}{3} + \frac{1}{7} + \frac{3}{n-1}\left(\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n}\right).$$
A simple computation shows that for $n \ge 14$,
$$\frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} < \frac{11}{63}(n-1),$$
which implies that the sum is less than 1. Hence, we must have $n \le 13$.
Thus, the sum is at most
$$\frac{1}{3} + \frac{1}{7} + \frac{1}{11} + \cdots + \frac{1}{47} + \frac{1}{51} + \frac{1}{55} < 1.$$
Therefore, the sum is never an integer, and the theorem holds.
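A small exact-arithmetic check of the statement in the proof's normalized setting ($\gcd(a,b)=1$, $a>1$), over a modest range (my own sketch):
from fractions import Fraction
from math import gcd

for a in range(2, 30):
    for b in range(1, 30):
        if gcd(a, b) != 1:
            continue
        s = Fraction(1, a)
        for n in range(1, 40):
            s += Fraction(1, a + n*b)
            assert s.denominator != 1      # the partial sum is never an integer
print("no integer sums found in the sampled range")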
| {
"language": "en",
"url": "https://mathoverflow.net/questions/39326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 1
} |
A gamma function identity In some of my previous work on mean values of Dirichlet L-functions, I came upon the following identity for the Gamma function:
\begin{equation}
\frac{\Gamma(a) \Gamma(1-a-b)}{\Gamma(1-b)}
+ \frac{\Gamma(b) \Gamma(1-a-b)}{ \Gamma(1-a)}
+ \frac{\Gamma(a) \Gamma(b)}{ \Gamma(a+b)} =
\pi^{\frac12}
\frac{\Gamma\left(\frac{ 1-a-b}{2}\right) }{\Gamma\left(\frac{a+b}{2}\right)}
\frac{\Gamma\left(\frac{a}{2}\right)}{\Gamma\left(\frac{1-a}{2}\right)}
\frac{\Gamma\left(\frac{b}{2}\right)}{\Gamma\left(\frac{1-b}{2}\right)}.
\end{equation}
As is often the case, once one knows such a formula should be true then it is easy to prove it. I give my proof below. My questions are 1) Has this formula been observed before? I have no idea how to search the literature for such a thing. 2) Is there a better proof? (Of course this is totally subjective, but one thing that would please me would be to avoid trigonometric functions since they do not appear in the formula.)
Proof.
Using
\begin{equation}
\frac{\Gamma(\frac{s}{2})}{\Gamma(\frac{1-s}{2})} = \pi^{-\frac12} 2^{1-s} \cos({\textstyle \frac{\pi s}{2}}) \Gamma(s),
\end{equation}
the right hand side is
\begin{equation}
2 \frac{\cos(\frac{\pi a}{2}) \cos(\frac{\pi b}{2}) \Gamma(a) \Gamma(b)}{\cos(\frac{\pi (a + b)}{2}) \Gamma(a+b)}.
\end{equation}
On the other hand, the left hand side is
\begin{equation}
\frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)}
\left(
\frac{\Gamma(a+b) \Gamma(1-a-b)}{\Gamma(b) \Gamma(1-b)}
+ \frac{\Gamma(a+b) \Gamma(1-a-b)}{\Gamma(a) \Gamma(1-a)}
+ 1
\right),
\end{equation}
which becomes after using $\Gamma(s) \Gamma(1-s) = \frac{\pi}{\sin(\pi s)}$,
\begin{equation}
\frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)}
\left(\frac{\sin(\pi a) + \sin( \pi b) + \sin(\pi(a + b))}{\sin(\pi(a+b))} \right).
\end{equation}
Using trig formulas, we get that this is
\begin{equation}
2 \frac{\Gamma(a)\Gamma(b)}{\Gamma(a + b)} \frac{\sin(\frac{\pi}{2}(a+b)) \cos(\frac{\pi}{2}(a-b)) + \sin(\frac{\pi}{2}(a+b)) \cos(\frac{\pi}{2}(a+b)) }{\sin(\pi(a+b))}
\end{equation}
I think I've run out of space? The rest is easy trig.
| This is merely a variation of your own proof, Matt, but I believe it makes things clearer.
The first step is to define $c:=1-a-b$. Then, your identity takes the form
$\dfrac{\Gamma\left(b\right)\Gamma\left(c\right)}{\Gamma\left(b+c\right)}+\dfrac{\Gamma\left(c\right)\Gamma\left(a\right)}{\Gamma\left(c+a\right)}+\dfrac{\Gamma\left(a\right)\Gamma\left(b\right)}{\Gamma\left(a+b\right)}=\pi^{1/2}\dfrac{\Gamma\left(\dfrac{a}{2}\right)}{\Gamma\left(\dfrac{1-a}{2}\right)}\cdot\dfrac{\Gamma\left(\dfrac{b}{2}\right)}{\Gamma\left(\dfrac{1-b}{2}\right)}\cdot\dfrac{\Gamma\left(\dfrac{c}{2}\right)}{\Gamma\left(\dfrac{1-c}{2}\right)}$
for $a+b+c=1$. This is symmetric in $a$, $b$, $c$, which means we are way less likely to go insane during the following computations.
Now, using the formula
$\dfrac{\Gamma\left(\dfrac{s}{2}\right)}{\Gamma\left(\dfrac{1-s}{2}\right)}=\pi^{-1/2}2^{1-s}\cdot\cos\dfrac{\pi s}{2}\cdot\Gamma\left(s\right)$,
the right hand side simplifies to
$\pi^{-1}\cdot 4\cdot\cos\dfrac{\pi a}{2}\cdot\Gamma\left(a\right)\cdot\cos\dfrac{\pi b}{2}\cdot\Gamma\left(b\right)\cdot\cos\dfrac{\pi c}{2}\cdot\Gamma\left(c\right)$
(here we used $2^{3-a-b-c}=2^{3-1}=4$), so the identity in question becomes
$\dfrac{\Gamma\left(b\right)\Gamma\left(c\right)}{\Gamma\left(b+c\right)}+\dfrac{\Gamma\left(c\right)\Gamma\left(a\right)}{\Gamma\left(c+a\right)}+\dfrac{\Gamma\left(a\right)\Gamma\left(b\right)}{\Gamma\left(a+b\right)}$
$ = \pi^{-1}\cdot 4\cdot\cos\dfrac{\pi a}{2}\cdot\Gamma\left(a\right)\cdot\cos\dfrac{\pi b}{2}\cdot\Gamma\left(b\right)\cdot\cos\dfrac{\pi c}{2}\cdot\Gamma\left(c\right)$.
Dividing by $\Gamma\left(a\right)\Gamma\left(b\right)\Gamma\left(c\right)$ on both sides, we get
$\dfrac{1}{\Gamma\left(a\right)\Gamma\left(b+c\right)}+\dfrac{1}{\Gamma\left(b\right)\Gamma\left(c+a\right)}+\dfrac{1}{\Gamma\left(c\right)\Gamma\left(a+b\right)}=4\pi^{-1}\cdot\cos\dfrac{\pi a}{2}\cdot\cos\dfrac{\pi b}{2}\cdot\cos\dfrac{\pi c}{2}$.
Since $b+c=1-a$, $c+a=1-b$, $a+b=1-c$, this rewrites as
$\dfrac{1}{\Gamma\left(a\right)\Gamma\left(1-a\right)}+\dfrac{1}{\Gamma\left(b\right)\Gamma\left(1-b\right)}+\dfrac{1}{\Gamma\left(c\right)\Gamma\left(1-c\right)}=4\pi^{-1}\cdot\cos\dfrac{\pi a}{2}\cdot\cos\dfrac{\pi b}{2}\cdot\cos\dfrac{\pi c}{2}$.
Now, using the formula $\dfrac{1}{\Gamma\left(z\right)\Gamma\left(1-z\right)}=\pi^{-1}\sin{\pi z}$ on the left hand side, and dividing by $\pi^{-1}$, we can simplify this to
$\sin{\pi a}+\sin{\pi b}+\sin{\pi c}=4\cdot\cos\dfrac{\pi a}{2}\cdot\cos\dfrac{\pi b}{2}\cdot\cos\dfrac{\pi c}{2}$.
Since $a+b+c=1$, we can set $A=\pi a$, $B=\pi b$, $C=\pi c$ and then have $A+B+C=\pi$. Our goal is to show that
$\sin A+\sin B+\sin C=4\cdot\cos\dfrac{A}2\cdot\cos\dfrac{B}2\cdot\cos\dfrac{C}{2}$
for any three angles $A$, $B$, $C$ satisfying $A+B+C=\pi$.
Now this can be proven in different ways:
1) One is by writing $C=\pi-A-B$ and simplifying using trigonometric formulae; this is rather boring and it breaks the symmetry.
2) Another one is using complex numbers: let $\alpha=e^{iA/2}$, $\beta=e^{iB/2}$ and $\gamma=e^{iC/2}$. Then,
$\sin A+\sin B+\sin C=4\cdot\cos\dfrac{A}2\cdot\cos\dfrac{B}2\cdot\cos\dfrac{C}{2}$
becomes
(1) $\dfrac{\alpha^2-\alpha^{-2}}{2i}+\dfrac{\beta^2-\beta^{-2}}{2i}+\dfrac{\gamma^2-\gamma^{-2}}{2i} = 4\cdot\dfrac{\alpha+\alpha^{-1}}{2}\cdot\dfrac{\beta+\beta^{-1}}{2}\cdot\dfrac{\gamma+\gamma^{-1}}{2}$.
Oh, and $A+B+C=\pi$ becomes $\alpha\beta\gamma=2i$. Now proving (1) is just a matter of multiplying out the right hand side and looking at the $8$ terms (two of them, namely $\alpha\beta\gamma$ and $\alpha^{-1}\beta^{-1}\gamma^{-1}$, cancel out, being $i$ and $-i$, respectively).
3) Here is how I would have done it 8 years ago: We can WLOG assume that $A$, $B$, $C$ are the angles of a triangle (this means that $A$, $B$, $C$ lie in the interval $\left[0,\pi\right]$, additionally to satisfying $A+B+C=\pi$), because everything is analytic (or by casebash). We denote the sides of this triangle by $a$, $b$, $c$ (so we forget about the old $a$, $b$, $c$), its semiperimeter $\dfrac{a+b+c}{2}$ by $s$, its area by $\Delta$ and its circumradius by $R$. Then, $\sin A=\dfrac{a}{2R}$ (by the Extended Law of Sines) and similarly $\sin B=\dfrac{b}{2R}$ and $\sin C=\dfrac{c}{2R}$, so that $\sin A+\sin B+\sin C=\dfrac{a}{2R}+\dfrac{b}{2R}+\dfrac{c}{2R}=\dfrac{a+b+c}{2R}=\dfrac{s}{R}$. On the other hand, one of the half-angle formulas shows that $\cos\dfrac{A}2=\sqrt{\dfrac{s\left(s-a\right)}{bc}}$, and similar formulas hold for $\cos\dfrac{B}2$ and $\cos\dfrac{C}2$, so that
$4\cdot\cos\dfrac{A}2\cdot\cos\dfrac{B}2\cdot\cos\dfrac{C}{2}$
$=4\cdot\sqrt{\dfrac{s\left(s-a\right)}{bc}}\cdot\sqrt{\dfrac{s\left(s-b\right)}{ca}}\cdot\sqrt{\dfrac{s\left(s-c\right)}{ab}}$
$=\dfrac{4s}{abc}\sqrt{s\left(s-a\right)\left(s-b\right)\left(s-c\right)}$.
Now, $\sqrt{s\left(s-a\right)\left(s-b\right)\left(s-c\right)}=\Delta$ (by Heron's formula) and $\Delta=\dfrac{abc}{4R}$ (by another formula for the area of the triangle), so this becomes
$4\cdot\cos\dfrac{A}2\cdot\cos\dfrac{B}2\cdot\cos\dfrac{C}{2}=\dfrac{4s}{abc}\cdot\dfrac{abc}{4R}=\dfrac{s}{R}$.
This is exactly what we got for the left hand side, qed.
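For the record, a short numerical confirmation with mpmath (my own; the test point is arbitrary, chosen with $a+b<1$ to stay away from the poles):
from mpmath import mp, gamma, sqrt, pi

mp.dps = 30
a, b = mp.mpf("0.3"), mp.mpf("0.41")
lhs = (gamma(a)*gamma(1 - a - b)/gamma(1 - b)
       + gamma(b)*gamma(1 - a - b)/gamma(1 - a)
       + gamma(a)*gamma(b)/gamma(a + b))
rhs = (sqrt(pi) * gamma((1 - a - b)/2)/gamma((a + b)/2)
       * gamma(a/2)/gamma((1 - a)/2)
       * gamma(b/2)/gamma((1 - b)/2))
print(lhs - rhs)    # ~ 1e-29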
| {
"language": "en",
"url": "https://mathoverflow.net/questions/39688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 6,
"answer_id": 3
} |
Any sum of 2 dice with equal probability The question is the following: Can one create two nonidentical loaded 6-sided dice such that when one throws both dice and sums their values, the probability of every sum (from 2 to 12) is the same? I said nonidentical because it's easy to verify that with identical loaded dice it's not possible.
Formally: Let's say that $q_{i}$ is the probability that we throw $i$ on the first die and $p_{i}$ is the same for the second die. $p_{i},q_{i} \in [0,1]$ for all $i \in 1\ldots 6$. The question is whether, with these constraints, there are $q_{i}$s and $p_{i}$s that satisfy the following equations:
$ q_{1} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{2} + q_{2} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{3} + q_{2} \cdot p_{2} + q_{3} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{4} + q_{2} \cdot p_{3} + q_{3} \cdot p_{2} + q_{4} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{5} + q_{2} \cdot p_{4} + q_{3} \cdot p_{3} + q_{4} \cdot p_{2} + q_{5} \cdot p_{1} = \frac{1}{11}$
$ q_{1} \cdot p_{6} + q_{2} \cdot p_{5} + q_{3} \cdot p_{4} + q_{4} \cdot p_{3} + q_{5} \cdot p_{2} + q_{6} \cdot p_{1} = \frac{1}{11}$
$ q_{2} \cdot p_{6} + q_{3} \cdot p_{5} + q_{4} \cdot p_{4} + q_{5} \cdot p_{3} + q_{6} \cdot p_{2} = \frac{1}{11}$
$ q_{3} \cdot p_{6} + q_{4} \cdot p_{5} + q_{5} \cdot p_{4} + q_{6} \cdot p_{3} = \frac{1}{11}$
$ q_{4} \cdot p_{6} + q_{5} \cdot p_{5} + q_{6} \cdot p_{4} = \frac{1}{11}$
$ q_{5} \cdot p_{6} + q_{6} \cdot p_{5} = \frac{1}{11}$
$ q_{6} \cdot p_{6} = \frac{1}{11}$
I don't really know how to start with this. Any suggestions are welcome.
| Here is an alternate solution, which I ran across while looking through Jim Pitman's undergraduate probability text. (It's problem 3.1.19.)
Let $S$ be the sum of numbers obtained by rolling two dice, and assume $P(S=2)=P(S=12) = 1/11$. Then
$P(S=7) \ge p_1 q_6 + p_6 q_1 = P(S=2) {q_6 \over q_1} + P(S=12) {q_1 \over q_6}$
and so $P(S=7) \ge 1/11 (q_1/q_6 + q_6/q_1)$. The second factor here is at least two, so $P(S=7) \ge 2/11$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/41310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 6,
"answer_id": 0
} |
Special arithmetic progressions involving perfect squares Prove that there are infinitely many positive integers $a$, $b$, $c$ that are consecutive terms of an arithmetic progression and also satisfy the condition that $ab+1$, $bc+1$, $ca+1$ are all perfect squares.
I believe this can be done using Pell's equation. What is interesting however is that the following result for four numbers apparently holds:
Claim. There are no positive integers $a$, $b$, $c$, $d$ that are consecutive terms of an arithmetic progression and also satisfy the condition that $ab+1$, $ac+1$, $ad+1$, $bc+1$, $bd+1$, $cd+1$ are all perfect squares.
I am curious to see if there is any (decent) solution.
Thanks.
| Starting from the equations in my previous answer, we get, by multiplying them in pairs,
$$(x-y)x(x+y)(x+2y) + (x-y)x + (x+y)(x+2y) + 1 = (z_1 z_6)^2\,,$$
$$(x-y)x(x+y)(x+2y) + (x-y)(x+y) + x(x+2y) + 1 = (z_2 z_5)^2\,,$$
$$(x-y)x(x+y)(x+2y) + (x-y)(x+2y) + x(x+y) + 1 = (z_3 z_4)^2\,.$$
Write $u = z_1 z_6$, $v = z_2 z_5$, $w = z_3 z_4$ and take differences to obtain
$$3 y^2 = u^2 - v^2 \qquad\text{and}\qquad y^2 = v^2 - w^2\,.$$
The variety $C$ in ${\mathbb P}^3$ described by these two equations is a smooth curve of genus 1 whose Jacobian elliptic curve is 24a1 in the Cremona database; this elliptic curve has rank zero and a torsion group of order 8. This implies that $C$ has exactly 8 rational points; up to signs they are given by $(u:v:w:y) = (1:1:1:0)$ and $(2:1:0:1)$. So $y = 0$ or $w = 0$. In the first case, we do not have an honest AP ($y$ is the difference). In the second case, we get the contradiction $abcd + ad + bc + 1 = 0$ ($a,b,c,d$ are supposed to be positive). So unless I have made a mistake somewhere, this proves that there are no such APs of length 4.
Addition: We can apply this to rational points on the surface. The case $y = 0$ gives a bunch of conics of the form
$$x^2 + 1 = z_1^2, \quad z_2 = \pm z_1, \quad \dots, \quad z_6 = \pm z_1\,;$$
the case $w = 0$ leads to $ad = -1$ or $bc = -1$. The second of these gives $ad + 1 < 0$, and the first gives $ac + 1 = (a^2 + 1)/3$, which cannot be a square. This shows that all the rational points are on the conics mentioned above; in particular, (weak) Bombieri-Lang holds for this surface.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/88220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 5
} |
Form of even perfect numbers From the list of even perfect numbers
http://en.wikipedia.org/wiki/List_of_perfect_numbers
it can be observed that all of them have either 6 or 8 as a last digit.
Is this true for all even perfect numbers?
In other words, does one of the congruences
$$n\equiv 1 \ (\text{mod 5}), \quad n\equiv 3 \ (\text{mod 5})$$
hold for any even perfect number n? I suppose there are results of this kind but couldn't find any.
| Every even perfect number is of the form $2^{p-1}(2^p - 1)$ where $2^p - 1$ is a (Mersenne) prime. Note that $p$ must be prime – if $p = ab$ with $a, b > 1$ then
$$2^p - 1 = 2^{ab} - 1 = (2^a)^b - 1 = (2^a - 1)(1 + 2^a + 2^{2a} + \dots\ + 2^{a(b-1)}).$$
If $p = 2$, we obtain the first perfect number $6$ which satisfies $ 6 \equiv 1\ (\text{mod 5})$. Every other prime is odd, so let $p = 2k + 1$. Then
$$2^{p-1}(2^p - 1) = 2^{2k}(2^{2k+1} - 1) = 2\cdot 2^{4k} - 2^{2k} = 2\cdot 16^k - 4^k \equiv 2 - (-1)^k\ (\text{mod 5}).$$
So, for $p = 2k + 1$,
$$2^{p-1}(2^p - 1) \equiv \begin{cases}
1 \ (\text{mod 5}) & \text{if }k\text{ is even}\newline
3 \ (\text{mod 5}) & \text{if }k\text{ is odd}.
\end{cases}$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/99787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Lambert series identity Can someone give me a short proof of the identity,
$$\sum_{n=1}^\infty\frac{q^nx^{n^2}}{1-qx^{n}}+\sum_{n=1}^\infty\frac{q^nx^{n(n+1)}}{1-x^n}=\sum_{n=1}^\infty\frac{q^nx^n}{1-x^n}$$
| It's just about using geometric series a lot. Indeed, we have
$\sum\limits_{n=1}^\infty\frac{q^n x^{n^2}}{1-qx^n}=\sum\limits_{n=1}^\infty\sum\limits_{k=0}^\infty q^{n+k}x^{n(n+k)}=\sum\limits_{m=1}^\infty\sum\limits_{d\ge m}q^dx^{md}$
and
$\sum\limits_{n=1}^\infty\frac{q^n x^{n(n+1)}}{1-x^n}=\sum\limits_{n=1}^\infty\sum\limits_{k=0}^\infty q^{n}x^{n(n+1+k)}=\sum\limits_{m=1}^\infty\sum\limits_{1\le d<m}q^dx^{md}$.
Therefore,
$\sum\limits_{n=1}^\infty\frac{q^n x^{n^2}}{1-qx^n}+\sum\limits_{n=1}^\infty\frac{q^n x^{n(n+1)}}{1-x^n}=
\sum\limits_{m=1}^\infty\sum\limits_{d\ge m}q^dx^{md}+\sum\limits_{m=1}^\infty\sum\limits_{1\le d<m}q^dx^{md}=
\sum\limits_{m=1}^\infty\sum\limits_{d=1}^\infty q^dx^{md}$.
However,
$\sum\limits_{n=1}^\infty\frac{q^n x^{n}}{1-x^n}= \sum\limits_{m=1}^\infty\sum\limits_{d=1}^\infty q^dx^{md}$ as well, and we are done.
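As a quick numerical sanity check of the identity (not a proof), one can sum the three Lambert series at sample values of $q$ and $x$ inside the unit disc; mpmath's `nsum` handles the tails:

```python
from mpmath import mp, nsum, inf

mp.dps = 25
q, x = mp.mpf('0.3'), mp.mpf('0.5')   # sample values with |q|, |x| < 1
lhs1 = nsum(lambda n: q**n*x**(n**2)/(1 - q*x**n), [1, inf])
lhs2 = nsum(lambda n: q**n*x**(n*(n + 1))/(1 - x**n), [1, inf])
rhs  = nsum(lambda n: q**n*x**n/(1 - x**n), [1, inf])
print(lhs1 + lhs2 - rhs)   # numerically zero
```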
| {
"language": "en",
"url": "https://mathoverflow.net/questions/140418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
A published proof for: the number of labeled $i$-edge ($i \geq 1$) forests on $p^k$ vertices is divisible by $p^k$ Let $F(n;i)$ be the number of labeled $i$-edge forests on $n$ vertices (A138464 on the OEIS). The first few values of $F(n;i) \pmod n$ are listed below:
$$\begin{array}{r|rrrrrrrrrrr}
 & i=0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 & 10 \\
\hline
n=2 & 1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
3 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
4 & 1 & 2 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
5 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
6 & 1 & 3 & 3 & 3 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
7 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
8 & 1 & 4 & 2 & 4 & 1 & 0 & 0 & 0 & 0 & 0 & 0 \\
9 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
10 & 1 & 5 & 0 & 0 & 5 & 5 & 0 & 0 & 0 & 0 & 0 \\
11 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
\end{array}$$
We see that if $n$ is an odd prime power and $i \geq 1$, then $n$ divides $F(n;i)$. I can prove this via group actions and induction.
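The table itself can be reproduced by brute force for small $n$, enumerating edge subsets of $K_n$ and keeping the acyclic ones (a minimal sketch; feasible up to about $n=7$):

```python
from itertools import combinations

def forest_counts(n):
    """F(n; i) for i = 0..n-1, by enumerating edge subsets of K_n."""
    edges = list(combinations(range(n), 2))
    counts = [0]*n
    for i in range(n):
        for subset in combinations(edges, i):
            parent = list(range(n))
            def find(a):
                while parent[a] != a:
                    parent[a] = parent[parent[a]]
                    a = parent[a]
                return a
            acyclic = True
            for u, v in subset:
                ru, rv = find(u), find(v)
                if ru == rv:          # adding this edge closes a cycle
                    acyclic = False
                    break
                parent[ru] = rv
            if acyclic:
                counts[i] += 1
    return counts

for n in [3, 4, 5, 7]:
    print(n, [c % n for c in forest_counts(n)])   # matches the rows of the table above
```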
Question: Is there a published proof of this result?
(Or, alternatively, a succinct proof of this result.)
| The proof is now Lemma 2 here:
A. P. Mani, R. J. Stones, Congruences for the weighted number of labeled forests. Integers, 16 (2016): A17.
which is freely available from: http://www.integers-ejcnt.org/vol16.html
| {
"language": "en",
"url": "https://mathoverflow.net/questions/160399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
solutions to special diophantine equations Let $0\le x,y,z,u,v,w\le n$ be integer numbers obeying
\begin{align*}
x^2+y^2+z^2=&u^2+v^2+w^2\\
x+y+v=&u+w+z\\
x\neq& w
\end{align*}
(Please note that the second equality is $x+y+v=u+w+z$ NOT $x+y+z=u+v+w$. This has led to some mistakes in some of the answers below.)
How can the solutions to the above equations be characterized? One class of solutions is as follows
\begin{align*}
x=&u\\
y=&w\\
z=&v
\end{align*}
| For the system of equations:
$$\left\{\begin{aligned}&x^2+y^2+z^2=u^2+w^2+v^2\\&x+y+v=u+w+z\end{aligned}\right.$$
You can record solutions:
$$x=s^2+kt-ks+kq+ts-tq-qs$$
$$y=s^2+kt+ks-kq-ts+tq-qs$$
$$z=s^2+kt+ks-kq+ts-tq+2q^2-3qs$$
$$u=s^2+kt+ks-kq+ts-tq-qs$$
$$w=kt-s^2-ks+kq-ts+tq-2q^2+3qs$$
$$v=kt-s^2+ks-kq+ts-tq+qs$$
where $s,q,k,t$ are arbitrary integers.
For the slightly different system of equations:
$$\left\{\begin{aligned}&x^2+y^2+z^2=u^2+w^2+v^2\\&x+y+z=u+w+v\end{aligned}\right.$$
Solutions have the form:
$$x=2s^2+(2q+2t+k)s+kt+qk+2qt$$
$$y=s^2+(q+k)s+qk+kt-t^2$$
$$z=s^2+(t+k)s+qk+kt-q^2$$
$$u=s^2+(q+t+k)s+qt+qk+kt$$
$$w=2s^2+(2q+2t+k)s+qt+qk+kt$$
$$v=s^2+ks+qk+kt-q^2-t^2$$
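A quick symbolic check of the first family with sympy (if both printed expressions are $0$, the parametrization does satisfy the original system; the second family can be checked the same way):

```python
import sympy as sp

s, q, k, t = sp.symbols('s q k t')
x = s**2 + k*t - k*s + k*q + t*s - t*q - q*s
y = s**2 + k*t + k*s - k*q - t*s + t*q - q*s
z = s**2 + k*t + k*s - k*q + t*s - t*q + 2*q**2 - 3*q*s
u = s**2 + k*t + k*s - k*q + t*s - t*q - q*s
w = k*t - s**2 - k*s + k*q - t*s + t*q - 2*q**2 + 3*q*s
v = k*t - s**2 + k*s - k*q + t*s - t*q + q*s

print(sp.expand(x**2 + y**2 + z**2 - u**2 - w**2 - v**2))  # 0 iff the quadratic identity holds
print(sp.expand(x + y + v - u - w - z))                    # 0 iff the linear identity holds
```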
| {
"language": "en",
"url": "https://mathoverflow.net/questions/184202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Generic polynomial for alternating group ${A}_{4}$ is not correct I was validating the percentage of cases where the generic two parameter polynomial for Galois group ${A}_{4}$ is valid. We have
\begin{equation*}
{f}^{{A}_{4}} \left({x, \alpha, \beta}\right) = {x}^{4} - \frac{6\, A}{B}\, {x}^{3} - 8\, x + \frac{1}{{B}^{2}} \left({9\, {A}^{2} - 12 \left({{\alpha}^{3} - {\beta}^{3} + 27}\right) B}\right) \in K \left({\alpha, \beta}\right) \left[{x}\right]
\end{equation*}
where
\begin{equation*}
A = {\alpha}^{3} - {\beta}^{3} - 9\, {\beta}^{2} - 27\, \beta - 54
\end{equation*}
and
\begin{equation*}
B = {\alpha}^{3} - 3\, \alpha\, {\beta}^{2} + 2\, {\beta}^{3} - 9\, \alpha\, \beta + 9\, {\beta}^{2} - 27 \left({\alpha - \beta - 1}\right).
\end{equation*}
from Arne Ledet "Constructing Generic Polynomials", Proceedings of the Workshop on Number Theory, Institute of Mathematics, Waseda University, Tokyo, 2001. When testing for $- 100 \le \alpha, \beta \le + 100$ we have 99.990% of the irreducible cases belonging to the ${S}_{4}$ group and 0.005% of the remaining cases belonging to the ${A}_{4}$ and ${D}_{4}$ groups, respectively.
What is the correction if known and are there other two parameter cases known for the ${A}_{4}$ group? I do have from Gene Smith ("Some Polynomials over $\mathbb{Q} \left({t}\right)$ and their Galois groups", Mathematics of Computation, 69(230):775-796, August 1999.) the example ${f}^{{A}_{4}} \left({x, t}\right) =
{x}^{4} + 18\, t\, {x}^{3} + \left({81\, {t}^{2} + 2}\right)\, {x}^{2} + 2\, t \left({54\, {t}^{2} + 1}\right) x + 1 \in
K \left({t}\right) \left[{x}\right]$ which is valid and I encountered a five parameter case which I have not yet tested. I have validated the other common examples for ${S}_{4}$, ${V}_{4}$, ${D}_{4}$, and ${C}_{4}$.
The Galois group is a subgroup of $A_4$ if and only if the discriminant is a perfect square. If you change $x^3$ to $x^2$ you get the discriminant to be:
$$\frac{1728^2 \left(b^2+3 b+9\right)^2 \left(a^3 (2 b+3)-3 a^2 \left(b^2+3 b+9\right)+\left(b^2+3 b+9\right)^2\right)^2}{\left(a^3-3 a \left(b^2+3
b+9\right)+2 b^3+9 b^2+27 b+27\right)^4},$$
so that definitely works. The discriminant is definitely NOT a square in the original version.
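Assuming the correction really is replacing the $x^3$ term by $x^2$ (as suggested above), one can spot-check the square/non-square behaviour of the two discriminants exactly with sympy for a few integer $(\alpha,\beta)$:

```python
from sympy import symbols, Rational, Integer, discriminant, sqrt

X = symbols('x')

def f(alpha, beta, corrected):
    A = alpha**3 - beta**3 - 9*beta**2 - 27*beta - 54
    B = (alpha**3 - 3*alpha*beta**2 + 2*beta**3 - 9*alpha*beta
         + 9*beta**2 - 27*(alpha - beta - 1))
    mid = X**2 if corrected else X**3      # proposed fix: x^3 -> x^2
    return (X**4 - Rational(6)*A/B*mid - 8*X
            + (9*A**2 - 12*(alpha**3 - beta**3 + 27)*B)/B**2)

for a, b in [(2, 3), (5, -1), (7, 4)]:     # arbitrary test values with B != 0
    a, b = Integer(a), Integer(b)
    d_fix = discriminant(f(a, b, True), X)
    d_old = discriminant(f(a, b, False), X)
    print(a, b, sqrt(d_fix).is_rational, sqrt(d_old).is_rational)
```

If the correction is right, the first flag should be `True` for every tested pair, while the second is generically `False`.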
| {
"language": "en",
"url": "https://mathoverflow.net/questions/220739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Definite integral with two K-Bessel functions and x I would like to calculate the definite integral with K-Bessel functions, where $a$ and $b$ are complex ($n$ and $k$ integers):
$$\int_{0}^{\infty} x \;K_{a}(nx) \; K_{b}(kx) \; dx$$
I could not find it in the literature with $a$ and $b$ complex (we need $\operatorname{Re}(a)<1$ and $\operatorname{Re}(b)<1$ for convergence at zero!).
Any reference or help on this subject is welcome.
| Mathematica:
$$\int_{0}^{\infty} x \;K_{a}(nx) \; K_{b}(kx) \; dx=\frac{1}{2n^2}(k n)^{-b} $$
$$\qquad\times \left[n^{2 b} \Gamma (b) \Gamma \left(-\frac{a}{2}-\frac{b}{2}+1\right) \Gamma \left(\tfrac{1}{2} (a-b+2)\right) \, _2F_1\left(\tfrac{1}{2} (-a-b+2),\tfrac{1}{2} (a-b+2);1-b;\frac{k^2}{n^2}\right)\right.$$
$$\qquad\left.+\,k^{2 b} \Gamma (-b) \Gamma \left(\tfrac{1}{2} (-a+b+2)\right) \Gamma \left(\tfrac{1}{2} (a+b+2)\right) \, _2F_1\left(\tfrac{1}{2} (-a+b+2),\tfrac{1}{2} (a+b+2);b+1;\frac{k^2}{n^2}\right)\right]$$
for $-2<{\rm Re}(a-b)<2$, $-2<{\rm Re}(a+b)<2$
if $k=n$ this simplifies to
$$\int_{0}^{\infty} x \;K_{a}(nx) \; K_{b}(nx)\,dx=\frac{\pi ^2 (a^2-b^2)}{4 n^2 (\cos \pi b-\cos \pi a)}$$
there do not seem to be further simplifications if $k\neq n$
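The $k=n$ simplification is easy to check numerically for sample complex orders (a sketch with mpmath; the chosen $a,b$ are assumed to satisfy the convergence conditions stated in the question):

```python
from mpmath import mp, besselk, quad, pi, cos, inf

mp.dps = 30
a, b, n = mp.mpc(0.3, 0.2), mp.mpc(0.1, -0.4), 2   # sample values

numeric = quad(lambda x: x*besselk(a, n*x)*besselk(b, n*x), [0, inf])
closed  = pi**2*(a**2 - b**2)/(4*n**2*(cos(pi*b) - cos(pi*a)))
print(numeric)
print(closed)   # the two should agree to working precision
```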
| {
"language": "en",
"url": "https://mathoverflow.net/questions/235199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Pairs of matrices Consider two matrices $A, B\in\mathcal{M}_n(\mathbb{C})$, such that $A, B$ has no common eigenvectors. Is it true that for some nonzero $t\in\mathbb{C}$, matrix $A+tB$ is similar to diagonal matrix $\text{diag}(a_1, a_2, \ldots, a_n)$, where $a_i\not = a_j, \forall i\not= j$?
| It's false for every $n\ge 3$.
Consider $A_n,B_n$ $n\times n$ square matrices with no common eigenvector (these exist for $n=0$ and $n\ge 2$). Then
$$A=\begin{pmatrix} A_2 & 0 & 0\\ 0 & A_2 & 0\\ 0 & 0 & A_{n-4}\end{pmatrix},\qquad B=\begin{pmatrix} B_2 & 0 & 0\\ 0 & B_2 & 0\\ 0 & 0 & B_{n-4}\end{pmatrix}$$
work for $n=4$ and $n\ge 6$.
It remains to do $n=3,5$. Define
$$N=\begin{pmatrix} 0 & 0 & 0\\ 1 & 0 & 0\\ 0 &1& 0\end{pmatrix},\qquad N'=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & -1\\ 0 &0& 0\end{pmatrix}$$
Then $N$ and $N'$ have no common eigenvector but the plane spanned by $N,N'$ consists of nilpotent matrices.
Then the matrices
$$A=\begin{pmatrix} N & 0\\ 0 & A_{n-3}\end{pmatrix},\qquad B=\begin{pmatrix} N' & 0\\ 0 & B_{n-3}\end{pmatrix}$$
work for $n=3$ and all $n\ge 5$.
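For the $n=3$ building block one can confirm the two key facts directly (a small sympy check): $N$ and $N'$ have no common eigenvector (their eigenvectors span $e_3$ and $e_1$ respectively), and $N+tN'$ is nilpotent for every $t$, so it is never similar to a diagonal matrix with distinct entries.

```python
import sympy as sp

t = sp.symbols('t')
N  = sp.Matrix([[0, 0, 0], [1, 0, 0], [0, 1, 0]])
Np = sp.Matrix([[0, 1, 0], [0, 0, -1], [0, 0, 0]])

# both matrices are nilpotent, so every eigenvector lies in the kernel
print(N.nullspace())    # spanned by e3
print(Np.nullspace())   # spanned by e1 -> no common eigenvector

M = N + t*Np
print(sp.factor(M.charpoly(sp.symbols('lam')).as_expr()))   # lam**3 for every t
```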
| {
"language": "en",
"url": "https://mathoverflow.net/questions/253249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partial sum of Hypergeometric2F1 function Can we get a closed form for the following series:
$$\sum\limits_{x=1}^c \dfrac{(x+c-1)!}{x!} {}_2F_1(x+c,x,x+1,z)$$
where $c$ is a positive integer and $z$ is a real number less than -1?
Any suggestions or hints are appreciated.
| $$\sum\limits_{x=1}^c \dfrac{(x+c-1)!}{x!} {}_2F_1(x+c,x,x+1,z)=\frac{(c-1)!}{(z-1)^{2c-1}}P_c(z),$$
where $P_c(z)$ is a polynomial in $z$ of degree $2c-2$, the first few are
$$P_1(z)=1$$
$$P_2(z)=z^2-4 z+5$$
$$P_3(z)=z^4-6 z^3+16 z^2-24 z+19$$
$$P_4(z)=z^6-8 z^5+29 z^4-64 z^3+97 z^2-104 z+69$$
$$P_5(z)=z^8-10 z^7+46 z^6-130 z^5+256 z^4-380 z^3+446 z^2-410 z+251$$
$$P_6(z)=z^{10}-12 z^9+67 z^8-232 z^7+562 z^6-1024 z^5+1484 z^4-1792 z^3+1847 z^2-1572 z+923$$
$$P_7(z)=z^{12}-14 z^{11}+92 z^{10}-378 z^9+1093 z^8-2380 z^7+4096 z^6-5810 z^5+7071 z^4-7630 z^3+7344 z^2-5992 z+3431$$
these polynomials may well have appeared before...
| {
"language": "en",
"url": "https://mathoverflow.net/questions/261433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
De Bruijn tori in higher dimensions?
Q. Do there exist De Bruijn tori in dimension $d > 2$?
A De Bruijn torus
is a two-dimensional generalization of a
De Bruijn sequence.
A De Bruijn sequence is, for two symbols,
a cyclical bit-string that contains all bit strings of length $n$
as consecutive, left-to-right bits (with wrap-around).
For example, here is a sequence of $8$ bits that contains all $2^3$ bit
strings of length $3$:
$$
\begin{matrix}
\color{red}{0} & \color{red}{0} & \color{red}{0} & 1 & 1 & 1 & 0 & 1\\
0 & \color{red}{0} & \color{red}{0} & \color{red}{1} & 1 & 1 & 0 & 1\\
\color{red}{0} & 0 & 0 & 1 & 1 & 1 & \color{red}{0} & \color{red}{1}\\
0 & 0 & \color{red}{0} & \color{red}{1} & \color{red}{1} & 1 & 0 & 1\\
\color{red}{0} & \color{red}{0} & 0 & 1 & 1 & 1 & 0 & \color{red}{1}\\
0 & 0 & 0 & 1 & 1 & \color{red}{1} & \color{red}{0} & \color{red}{1}\\
0 & 0 & 0 & 1 & \color{red}{1} & \color{red}{1} & \color{red}{0} & 1\\
0 & 0 & 0 & \color{red}{1} & \color{red}{1} & \color{red}{1} & 0 & 1
\end{matrix}
$$
Here is a De Bruijn torus that includes all $2 \times 2$ bit-matrices
exactly once (from here):
$$
\begin{matrix}
0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 \\
1 & 1 & 1 & 0 \\
0 & 0 & 1 & 0
\end{matrix}
$$
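One can verify mechanically that this array contains each of the $16$ possible $2\times 2$ bit matrices exactly once, with wrap-around (a small sketch):

```python
T = [[0, 1, 0, 0],
     [0, 1, 1, 1],
     [1, 1, 1, 0],
     [0, 0, 1, 0]]

seen = set()
for r in range(4):
    for c in range(4):
        block = (T[r][c], T[r][(c + 1) % 4],
                 T[(r + 1) % 4][c], T[(r + 1) % 4][(c + 1) % 4])
        seen.add(block)
print(len(seen))   # 16: every 2x2 bit matrix occurs exactly once
```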
A $4 \times 4$ De Bruijn torus has been explicitly constructed.
My question is: Is it known that there exist
De Bruijn tori in dimensions larger than $d=2$?
Perhaps for every dimension?
For example, a three-dimensional pattern of bits that includes
every $k \times k \times k$ bit-(hyper)matrix?
| Yes. See "New constructions for de Bruijn tori" by Hurlbert and Isaak
| {
"language": "en",
"url": "https://mathoverflow.net/questions/267329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
An identity for product of central binomials This "innocent-looking" identity came out of some calculation with determinants, and I would like to inquire whether one can provide a proof. Different methods of proof would be valuable and instructive.
Question. Can you justify the following identity?
$$\prod_{j=1}^n\binom{2j}j=\prod_{j=1}^n2\binom{n+j}{2j}.$$
| I think that this is routine to verify by induction (as Fedor Petrov suggests in a comment). With respect to a response that is "instructional," suppose (or check) that the identity holds when $n = 4$. As one moves up to $n=5$, the right hand side changes from:
$$2\binom{5}{2}2\binom{6}{4}2\binom{7}{6}2\binom{8}{8}$$
to:
$$2\binom{6}{2}2\binom{7}{4}2\binom{8}{6}2\binom{9}{8}2\binom{10}{10} = 2\binom{6}{2}2\binom{7}{4}2\binom{8}{6}2\binom{9}{8}2$$
Using the fact that $\binom{n}{k} / \binom{n-1}{k} = \frac{n}{n-k}$, we find the ratio of the RHS for $n=5$ to the RHS when $n=4$ to be:
$$(6/4)(7/3)(8/2)(9/1)2 = 2\frac{9!}{5!4!}$$
Meanwhile, in passing from $n=4$ to $n=5$, the LHS changes by a multiplicative factor of
$$\binom{10}{5} = \frac{10!}{5!5!} = \frac{10}{5}\frac{9!}{5!4!} = 2\frac{9!}{5!4!}$$
as above.
I believe the general proof simply requires a bit of attention paid to bookkeeping around $n$ and $k$.
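A routine computational check of the identity for the first several $n$ (not a proof, just reassurance before setting up the induction):

```python
from math import comb

for n in range(1, 16):
    lhs = rhs = 1
    for j in range(1, n + 1):
        lhs *= comb(2*j, j)
        rhs *= 2*comb(n + j, 2*j)
    assert lhs == rhs
print("identity verified for n = 1..15")
```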
| {
"language": "en",
"url": "https://mathoverflow.net/questions/269707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Combinatorial identity: $\sum_{i,j \ge 0} \binom{i+j}{i}^2 \binom{(a-i)+(b-j)}{a-i}^2=\frac{1}{2} \binom{(2a+1)+(2b+1)}{2a+1}$ In my research, I found this identity and as I experienced, it's surely right. But I can't give a proof for it.
Could someone help me?
This is the identity:
let $a$ and $b$ be two positive integers; then:
$\sum_{i,j \ge 0} \binom{i+j}{i}^2 \binom{(a-i)+(b-j)}{a-i}^2=\frac{1}{2} \binom{(2a+1)+(2b+1)}{2a+1}$.
| Denote $h(x,y)=\sum_{i,j\geqslant 0} \binom{i+j}i x^iy^j=\frac1{1-(x+y)}$, $f(x,y)=\sum_{i,j\geqslant 0} \binom{i+j}i^2 x^iy^j$. We want to prove that $2xyf^2(x^2,y^2)$ is an odd (both in $x$ and in $y$) part of the function $h(x,y)$. In other words, we want to prove that $$2xyf^2(x^2,y^2)=\frac14\left(h(x,y)+h(-x,-y)-h(x,-y)-h(-x,y)\right)=\frac{2xy}{1-2(x^2+y^2)+(x^2-y^2)^2}.$$
So, our identity rewrites as $$f(x,y)=(1-2(x+y)+(x-y)^2)^{-1/2}=:f_0(x,y)$$
This is true for $x=0$, both parts become equal to $1/(1-y)$. Next, we find a differential equation in $x$ satisfied by the function $f_0$. It is not a big deal: $$\left(f_0(1-2(x+y)+(x-y)^2)\right)'_x=(x-y-1)f_0.$$
Since the initial value $f_0(0,y)$ and this relation uniquely determine the function $f_0$, it remains to check that this holds for $f(x,y)$, which is a straightforward identity with several binomials. Namely, comparing the coefficients of $x^{i-1}y^j$ we get
$$ i\left(\binom{i+j}j^2-2\binom{i+j-1}j^2-2\binom{i+j-1}i^2+\binom{i+j-2}i^2+\binom{i+j-2}j^2-2\binom{i+j-2}{i-1}^2\right)
$$
for $(f(1-2(x+y)+(x-y)^2))'_x$ and $$\binom{i+j-2}j^2-\binom{i+j-1}j^2-\binom{i+j-2}{j-1}^2$$
for $(x-y-1)f$. Both guys are equal to $$-2\frac{j}{i+j-1}\binom{i+j-1}{j}^2.$$
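For what it is worth, the identity is easy to confirm for small $a,b$ before chasing the differential equation (a quick sketch):

```python
from math import comb

def lhs(a, b):
    return sum(comb(i + j, i)**2 * comb((a - i) + (b - j), a - i)**2
               for i in range(a + 1) for j in range(b + 1))

for a in range(6):
    for b in range(6):
        assert 2*lhs(a, b) == comb((2*a + 1) + (2*b + 1), 2*a + 1)
print("checked for 0 <= a, b <= 5")
```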
| {
"language": "en",
"url": "https://mathoverflow.net/questions/283540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "30",
"answer_count": 3,
"answer_id": 0
} |
Strengthened version of Isoperimetric inequality with n-polygon Let $ABCD$ be a convex quadrilateral with the lengths $a, b, c, d$ and the area $S$. The main result in our paper equivalent to:
\begin{equation} a^2+b^2+c^2+d^2 \ge 4S + \frac{\sqrt{3}-1}{\sqrt{3}}\sum{(a-b)^2}\end{equation}
where $\sum{(a-b)^2}=(a-b)^2+(a-c)^2+(a-d)^2+(b-c)^2+(b-d)^2+(c-d)^2$ and note that $\frac{\sqrt{3}-1}{\sqrt{3}}$ is the best constant for this inequality.
Similarly with the same form above apply to n-polygon, I propose a conjecture that:
Let a convex polygon $A_1A_2...A_n$ with the lengths are $a_1, a_2, ...a_n$ and area $S$ we have:
\begin{equation} \sum_i^n{a_i^2} \ge 4\tan{\frac{\pi}{n}}S + k\sum_{i < j}{(a_i-a_j)^2}\end{equation}
I guess that $k=\tan{\frac{\pi}{n}}-\tan{\frac{\pi}{n+2}}$
I am looking for a proof of the inequality above.
Note that using Isoperimetric inequality we can prove that:
\begin{equation} \sum_i^n{a_i^2} \ge 4\tan{\frac{\pi}{n}}S\end{equation}
Case n=3:
*
*Hadwiger–Finsler inequality, $k=1$, while $\tan{\frac{\pi}{3}}-\tan{\frac{\pi}{5}} \approx 1.00550827956352$:
\begin{equation}a^{2} + b^{2} + c^{2} \geq 4 \sqrt{3}S+ (a - b)^{2} + (b - c)^{2} + (c - a)^{2}\end{equation}
Case n=4: our paper, $k=\frac{\sqrt{3}-1}{\sqrt{3}}$, $k=\tan{\frac{\pi}{4}}-\tan{\frac{\pi}{6}}$
\begin{equation} a^2+b^2+c^2+d^2 \ge 4S + \frac{\sqrt{3}-1}{\sqrt{3}}\sum{(a-b)^2}\end{equation}
See also:
*
*Weitzenböck's inequality
*Hadwiger–Finsler inequality
*Isoperimetric inequality.
| The conjectured inequality, with $k=\tan{\frac{\pi}{n}}-\tan{\frac{\pi}{n+2}}$, is false for $n=3$. More specifically, the constant factor $k=1$ is optimal in the Hadwiger--Finsler inequality: e.g., consider $a=b=1$ and $c\approx0$.
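This degenerate-triangle counterexample is easy to see numerically (a small sketch using Heron's formula); the difference LHS $-$ RHS becomes negative once $c$ is small enough:

```python
from math import sqrt, tan, pi

k = tan(pi/3) - tan(pi/5)          # conjectured constant for n = 3, about 1.0055
a = b = 1.0
for c in [0.5, 0.1, 0.05, 0.01, 0.001]:
    s = (a + b + c)/2
    S = sqrt(s*(s - a)*(s - b)*(s - c))          # Heron's formula
    lhs = a*a + b*b + c*c
    rhs = 4*sqrt(3)*S + k*((a - b)**2 + (b - c)**2 + (c - a)**2)
    print(c, lhs - rhs)    # turns negative as c -> 0, so the conjectured k fails
```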
| {
"language": "en",
"url": "https://mathoverflow.net/questions/299056",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Block matrices and their determinants For $n\in\Bbb{N}$, define three matrices $A_n(x,y), B_n$ and $M_n$ as follows:
(a) the $n\times n$ tridiagonal matrix $A_n(x,y)$ with main diagonal all $y$'s, superdiagonal all $x$'s and subdiagonal all $-x$'s. For example,
$$ A_4(x,y)=\begin{pmatrix} y&x&0&0\\-x&y&x&0\\0&-x&y&x
\\0&0&-x&y\end{pmatrix}. $$
(b) the $n\times n$ antidiagonal matrix $B_n$ consisting of all $1$'s. For example,
$$B_4=\begin{pmatrix} 0&0&0&1\\0&0&1&0\\0&1&0&0\\1&0&0&0\end{pmatrix}.$$
(c) the $n^2\times n^2$ block-matrix $M_n=A_n(B_n,A_n(1,1))$ or using the Kronecker product $M_n=A_n(1,0)\otimes B_n+I_n\otimes A_n(1,1)$.
Question. What is the determinant of $M_n$?
UPDATE. For even indices, I conjecture that
$$\det(M_{2n})=\prod_{j,k=1}^n\left[1+4\cos^2\left(\frac{j\pi}{2n+1}\right)+4\cos^2\left(\frac{k\pi}{2n+1}\right)\right]^2.$$
This would confirm Philipp Lampe's "perfect square" claim.
| Flip the order of the Kronecker products to get $M'=A_n(I_n,I_n)+B_n\otimes T_n$, where $T_n=A_n(1,0)$. Note that $\det M=\det M'$. Since all blocks are polynomial in $A$, they commute, and therefore the determinant of $M'$ is $\det(f(T_n))$, where $f(x)=\det(A_n(1,1)+xB_n)$. That is, $f(x)=\det(B_n)\det(x+A_n(1,1)B_n)$ is $(-1)^{n\choose 2}$ times the characteristic polynomial of $H=-A_n(1,1)B_n$.
Let $t_n(x)$ be the characteristic polynomial of $T_n$. By repeated cofactor expansion on the first row, $t_n(x)=xt_{n-1}(x)+t_{n-2}(x)$. The initial conditions then imply that $t_n$ is the $(n+1)^{th}$ Fibonacci polynomial. The roots of $t_n$ are $2i\cos(k\pi/(n+1))$ for $k=1,\dots,n$.
The eigenvalues of $H$ are worked out in "The eigenvalues of some anti-tridiagonal Hankel matrices". When $n$ is odd they are $1$ and $\pm\sqrt{3+2\cos(\frac{2k\pi}{n+1})}$ for $k=1,\dots,\frac{n-1}{2}$. When $n$ is even they are $\pm\sqrt{1+4\cos^2(\frac{(2k+1)\pi}{n+1})}$ for $k=0,\dots,\frac{n}{2}-1$.
By a quick diagonalization argument $\det(f(T_n))$ is the resultant of $f$ and $t_n$. This plus some trig gives
$$
\det(M_{2n})=\prod_{j,k=1}^n\left[1+4\cos^2\left(\frac{j\pi}{2n+1}\right)+4\cos^2\left(\frac{k\pi}{2n+1}\right)\right]^2
$$
and
$$
\det(M_{2n-1})=\prod_{j=1}^{n-1}\left[1+4\cos^2\left(\frac{j\pi}{2n}\right)\right]^2\prod_{k=1}^{n-1}\left[1+4\cos^2\left(\frac{j\pi}{2n}\right)+4\cos^2\left(\frac{k\pi}{2n}\right)\right]^2.
$$
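A numerical check of the even case against the closed-form product (a sketch with numpy; the matrices are built exactly as defined in the question):

```python
import numpy as np

def A(n, x, y):                      # tridiagonal A_n(x, y)
    M = np.diag([float(y)]*n)
    for i in range(n - 1):
        M[i, i + 1] = x
        M[i + 1, i] = -x
    return M

def B(n):                            # antidiagonal matrix of ones
    return np.fliplr(np.eye(n))

for n in [2, 4, 6]:                  # even sizes, so n = 2m
    Mn = np.kron(A(n, 1, 0), B(n)) + np.kron(np.eye(n), A(n, 1, 1))
    m = n // 2
    prod = 1.0
    for j in range(1, m + 1):
        for k in range(1, m + 1):
            prod *= (1 + 4*np.cos(j*np.pi/(2*m + 1))**2
                       + 4*np.cos(k*np.pi/(2*m + 1))**2)**2
    print(n, np.linalg.det(Mn), prod)   # the two agree up to floating-point rounding
```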
| {
"language": "en",
"url": "https://mathoverflow.net/questions/310610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Prove that a certain integration yields the value $\frac{7}{9}$ Numerical methods surely indicate that $\int_0^{\frac{1}{3}} 2 \sqrt{9 x+1} \sqrt{21 x-4 \sqrt{3} \sqrt{x (9 x+1)}+1} \left(4 \sqrt{3} \sqrt{x (9 x+1)}+1\right) \, dx= \frac{7}{9}$.
Can this be formally demonstrated?
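For reference, here is a minimal high-precision numerical check of the claimed value (of course not a demonstration):

```python
from mpmath import mp, mpf, sqrt, quad

mp.dps = 30
f = lambda x: (2*sqrt(9*x + 1)
               * sqrt(21*x - 4*sqrt(3)*sqrt(x*(9*x + 1)) + 1)
               * (4*sqrt(3)*sqrt(x*(9*x + 1)) + 1))
print(quad(f, [0, mpf(1)/3]))   # 0.7777..., i.e. 7/9 to working precision
```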
| Well, pursuing (as best I can) the line of reasoning put forth by K B Dave in his (stereographic-projection-motivated) answer to the related (antecedent) question in https://math.stackexchange.com/questions/3327016/can-knowledge-of-int-fx2-dx-possibly-be-used-in-obtaining-int-fx-dx, I make the transformation $x\to \frac{4 m^2}{3 \left(m^2-3\right)^2}$, having a jacobian of $\frac{8 m \left(m^2+3\right)}{3 \left(m^2-3\right)^3}$ and new limits of integration (0,1}, whereupon Mathematica directly yields the desired answer of $\frac{7}{9}$.
K B Dave had written:
Let
\begin{equation}
3 x=u^2, 9 x+1=v^2,
\end{equation}
and stereographically project the curve $v^2-3 u^2=1$ about $(u,v) =(0,1)$, so that
\begin{equation}
\frac{v-1}{u}= \frac{3 u}{v+1}=m.
\end{equation}
Then the integrands reduce to rational expressions in $m$, …
| {
"language": "en",
"url": "https://mathoverflow.net/questions/338679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Natural number solutions for equations of the form $\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}$ Consider the equation $$\frac{a^2}{a^2-1} \cdot \frac{b^2}{b^2-1} = \frac{c^2}{c^2-1}.$$
Of course, there are solutions to this like $(a,b,c) = (9,8,6)$.
Is there any known approximation for the number of solutions $(a,b,c)$, when $2 \leq a,b,c \leq k$ for some $k \geq 2$?
More generally, consider the equation $$\frac{a_1^2}{a_1^2-1} \cdot \frac{a_2^2}{a_2^2-1} \cdot \ldots \cdot \frac{a_n^2}{a_n^2-1} = \frac{b_1^2}{b_1^2-1} \cdot \frac{b_2^2}{b_2^2-1}\cdot \ldots \cdot \frac{b_m^2}{b_m^2-1}$$
for some natural numbers $n,m \geq 1$. Similarly to the above question, I ask myself if there is any known approximation to the number of solutions $(a_1,\ldots,a_n,b_1,\ldots,b_m)$, with natural numbers $2 \leq a_1, \ldots, a_n, b_1, \ldots, b_m \leq k$ for some $k \geq 2$. Of course, for $n = m$, all $2n$-tuples are solutions, where $(a_1,\ldots,a_n)$ is just a permutation of $(b_1,\ldots,b_n)$.
| It seems worth noting that the equation in the title does have infinitely many solutions in positive integers, as for all $n$ it is satisfied by $$a={n(n^2-3)\over2},\ b=n^2-1,\ c=n^2-3.$$ The number of solutions of this form with $a\le k$ will be on the order of $\root3\of{2k}$, but Dmitry has found solutions not of this form.
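The family is immediate to verify with exact arithmetic, and it reproduces the example $(9,8,6)$ at $n=3$ (a quick sketch):

```python
from fractions import Fraction as F

def g(m):                          # m^2 / (m^2 - 1)
    return F(m*m, m*m - 1)

for n in range(3, 30):
    a, b, c = n*(n*n - 3)//2, n*n - 1, n*n - 3
    assert g(a)*g(b) == g(c), (n, a, b, c)
print("g(a)*g(b) == g(c) for n = 3..29; n = 3 gives (a, b, c) = (9, 8, 6)")
```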
| {
"language": "en",
"url": "https://mathoverflow.net/questions/359481",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Improving the lower bound $I(n^2) > \frac{2(q-1)}{q}$ when $q^k n^2$ is an odd perfect number Let $N = q^k n^2$ be an odd perfect number with special prime $q$ satisfying $q \equiv k \equiv 1 \pmod 4$ and $\gcd(q,n)=1$.
Define the abundancy index
$$I(x)=\frac{\sigma(x)}{x}$$
where $\sigma(x)$ is the classical sum of divisors of $x$.
Since $q$ is prime, we have the bounds
$$\frac{q+1}{q} \leq I(q^k) < \frac{q}{q-1},$$
which implies, since $N$ is perfect, that
$$\frac{2(q-1)}{q} < I(n^2) = \frac{2}{I(q^k)} \leq \frac{2q}{q+1}.$$
We now prove the following claim:
CLAIM: $$I(n^2) > \bigg(\frac{2(q-1)}{q}\bigg)\bigg(\frac{q^{k+1} + 1}{q^{k+1}}\bigg)$$
PROOF: We know that
$$\frac{\sigma(n^2)}{q^k}=\frac{2n^2}{\sigma(q^k)}=\frac{2n^2 - \sigma(n^2)}{\sigma(q^k) - q^k}=\gcd(n^2,\sigma(n^2)),$$
(since $\gcd(q^k,\sigma(q^k))=1$).
However, we have
$$\sigma(q^k) - q^k = 1 + q + \ldots + q^{k-1} = \frac{q^k - 1}{q - 1},$$
so that we obtain
$$\frac{\sigma(n^2)}{q^k}=\frac{\bigg(q - 1\bigg)\bigg(2n^2 - \sigma(n^2)\bigg)}{q^k - 1}=\sigma(n^2) - \bigg(q - 1\bigg)\bigg(2n^2 - \sigma(n^2)\bigg)$$
$$=q\sigma(n^2) - 2(q - 1)n^2.$$
Dividing both sides by $qn^2$, we get
$$I(n^2) - \frac{2(q-1)}{q} = \frac{I(n^2)}{q^{k+1}} > \frac{1}{q^{k+1}}\cdot\frac{2(q-1)}{q},$$
which implies that
$$I(n^2) > \bigg(\frac{2(q-1)}{q}\bigg)\bigg(\frac{q^{k+1} + 1}{q^{k+1}}\bigg).$$
QED.
To illustrate the improved bound:
(1) Unconditionally, we have
$$I(n^2) > \frac{2(q-1)}{q} \geq \frac{8}{5} = 1.6.$$
(2) Under the assumption that $k=1$:
$$I(n^2) > 2\bigg(1 - \frac{1}{q}\bigg)\bigg(1 + \left(\frac{1}{q}\right)^2\bigg) \geq \frac{208}{125} = 1.664.$$
(3) However, it is known that under the assumption $k=1$, we actually have
$$I(q^k) = 1 + \frac{1}{q} \leq \frac{6}{5} \implies I(n^2) = \frac{2}{I(q^k)} \geq \frac{5}{3} = 1.\overline{666}.$$
Here are my questions:
(A) Is it possible to improve further on the unconditional lower bound for $I(n^2)$?
(B) If the answer to Question (A) is YES, my next question is "How"?
I did notice that
$$\frac{2(q-1)}{q}+\frac{2}{q(q+1)}=I(n^2)=\frac{2q}{q+1}$$
when $k=1$.
| Here is a quick way to further refine the improved lower bound for $I(n^2)$:
Write
$$I(n^2)=\frac{2}{I(q^k)}=\frac{2q^k (q - 1)}{q^{k+1} - 1}=\frac{2q^{k+1} (q - 1)}{q(q^{k+1} - 1)}=\bigg(\frac{2(q-1)}{q}\bigg)\bigg(1+\frac{1}{q^{k+1}-1}\bigg).$$
Now use, for instance,
$$q^{k+1} - \frac{1}{q^2} > q^{k+1} - 1$$
to obtain
$$I(n^2)=\bigg(\frac{2(q-1)}{q}\bigg)\bigg(1+\frac{1}{q^{k+1}-1}\bigg)>\bigg(\frac{2(q-1)}{q}\bigg)\bigg(1 + \frac{1}{q^{k+1} - \frac{1}{q^2}}\bigg)=\bigg(\frac{2(q-1)}{q}\bigg)\Bigg(1 + \frac{q^2}{q^{k+3} - 1}\Bigg).$$
Note that
$$\bigg(\frac{2(q-1)}{q}\bigg)\bigg(1 + \frac{q^2}{q^{k+3} - 1}\bigg) - \bigg(\frac{2(q-1)}{q}\bigg)\bigg(1 + \frac{1}{q^{k+1}}\bigg)=\frac{2(q-1)}{q^{k+2} (q^{k+3} - 1)}>0$$
since $q$ is a prime satisfying $q \equiv k \equiv 1 \pmod 4$.
In fact, this method shows that there are infinitely many ways to refine the improved lower bound
$$I(n^2) > \bigg(\frac{2(q-1)}{q}\bigg)\bigg(\frac{q^{k+1}+1}{q^{k+1}}\bigg).$$
It remains to be seen whether there is a refined (improved) lower bound that is independent of $k$ (and therefore expressed entirely in terms of $q$).
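A quick exact check of the chain of bounds for a range of admissible $(q,k)$ (only a sanity check of the algebra above):

```python
from fractions import Fraction as F
from sympy import isprime

for q in (p for p in range(5, 200) if isprime(p) and p % 4 == 1):
    for k in [1, 5, 9, 13]:
        I = F(2*q**k*(q - 1), q**(k + 1) - 1)          # I(n^2) = 2/I(q^k)
        base = F(2*(q - 1), q)
        old = base*(1 + F(1, q**(k + 1)))              # earlier lower bound
        new = base*(1 + F(q**2, q**(k + 3) - 1))       # refined lower bound
        assert I > new > old
print("I(n^2) > refined bound > earlier bound for all tested (q, k)")
```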
| {
"language": "en",
"url": "https://mathoverflow.net/questions/382050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
An infinite series involving harmonic numbers I am looking for a proof of the following claim:
Let $H_n$ be the nth harmonic number. Then,
$$\frac{\pi^2}{12}=\ln^22+\displaystyle\sum_{n=1}^{\infty}\frac{H_n}{n(n+1) \cdot 2^n}$$
The SageMath cell that demonstrates this claim can be found here.
| Denoting $H_0=0$, we have $$\sum_{n=1}^\infty \frac{H_n}{n(n+1)2^n}=\sum_{n=1}^\infty \left(\frac1n-\frac1{n+1}\right)\frac{H_n}{2^n}=\sum_{n=1}^\infty \frac{1}n\left(\frac{H_n}{2^n}-\frac{H_{n-1}}{2^{n-1}}\right)\\=\sum_{n=1}^\infty\frac1{n^22^n}-\sum_{n=1}^\infty\frac{H_n}{(n+1)2^{n+1}}.$$
It is well known that $\sum_{n=1}^\infty\frac1{n^22^n}=\frac{\pi^2}{12}-\frac{\log^2 2}2$ (see the value of ${\rm Li}_2(1/2)$ here). Thus, it remains to show that $$\sum_{n=1}^\infty\frac{H_n}{(n+1)2^n}=\log^2 2.$$
For this, we take the square of the series $$
\log 2=-\log\left(1-\frac12\right)=\frac12+\frac1{2\cdot 2^2}+\frac1{3\cdot 2^3}+\ldots$$
to get $$\log^2 2=\sum_{a,b=1}^\infty \frac1{ab2^{a+b}}=\sum
_{a,b=1}^\infty \frac1{(a+b)2^{a+b}}\left(\frac1a+\frac1b\right)=\sum_{n=2}^\infty\frac1{n2^n}2H_{n-1}=\sum_{n=1}^\infty\frac{H_n}{(n+1)2^{n}}.$$
| {
"language": "en",
"url": "https://mathoverflow.net/questions/402397",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
EM-wave equation in matter from Lagrangian Note
I am not sure if this post is of relevance for this platform, but I already asked the question in Physics Stack Exchange and in Mathematics Stack Exchange without success.
Setup
Let's suppose a homogeneous dielectric with a (spatially) local dielectric response function $\underline{\underline{\epsilon}} (\omega)$ (in general a tensor), such that we have the linear response relation $$\mathbf{D}(\mathbf{x}, \omega) = \underline{\underline{\epsilon}} \left(\omega \right) \mathbf{E}(\mathbf{x}, \omega) \, ,$$ for the displacement field $\mathbf{D}$ in the dielectric.
We can now write down a Lagrangian in Fourier space, describing the EM-field coupling to the dielectric body
$$\mathcal{L}=\frac{1}{2}\left[\mathbf{E}^{*}\left(x,\omega\right) \cdot (\underline{\underline{\epsilon}} \left(\omega \right)-1) \mathbf{E}\left(x,\omega\right)+|\mathbf{E}|^{2}\left(x,\omega\right)-|\mathbf{B}|^{2}\left(x,\omega\right) \right] \, .$$
If we choose a gauge
\begin{align}
\mathbf{E} &= \frac{i \omega}{c} \mathbf{A} \\
\mathbf{B} &= \nabla \times \mathbf{A} \, ,
\end{align}
such that we can write the Lagrangian (suppressing arguments) in terms of the vector potential $\mathbf{A}$ as
$$\mathcal{L} =\frac{1}{2}\left[\frac{\omega^2}{c^2} \mathbf{A}^{*} \cdot (\underline{\underline{\epsilon}}- \mathbb{1}) \mathbf{A}+ \frac{\omega^2}{c^2} |\mathbf{A}|^{2}-|\nabla \times \mathbf{A}|^{2}\right] \, . $$
And consequently we have the physical action
$$
S[\mathbf{A}] = \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} \left(\mathbf{A}\right) \, .
$$
Goal
My goal is to derive the EM-wave equation for the electric field in the dielectric media.
Idea
So my ansatz is the following: If we use Hamilton's principle, we want the first variation of the action to be zero
\begin{align}
0 = \delta S[\mathbf{A}] &= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} S[\mathbf{A} + \varepsilon \mathbf{h}] \right|_{\varepsilon=0} \\
&= \left.\frac{\mathrm{d}}{\mathrm{d} \varepsilon} \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \mathcal{L} (\mathbf{A} + \varepsilon \mathbf{h}) \right|_{\varepsilon=0} \\ &= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \mathbf{A}^* \cdot ({\underline{\underline{\epsilon}}}-\mathbb{1}) \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot ({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{h}^* \cdot \mathbf{A} \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \mathbf{A}^* ) \cdot ( \nabla \times \mathbf{h}) - (\nabla \times \mathbf{h}^* ) \cdot ( \nabla \times \mathbf{A}) \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] \cdot \mathbf{h}^* + \frac{\omega^2}{c^2} \mathbf{A}^* \cdot \mathbf{h} + \frac{\omega^2}{c^2} \mathbf{A} \cdot \mathbf{h}^* \\ &\quad \quad \quad \quad \quad \quad \quad \quad- (\nabla \times \nabla \times \mathbf{A}^* ) \cdot \mathbf{h} - (\nabla \times \nabla \times \mathbf{A} ) \cdot \mathbf{h}^* \Bigg) \\
&= \int d \mathbf{x} \int \frac{d \omega}{2 \pi} \; \frac{1}{2} \Bigg( \underbrace{\left[ \frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h} \\ & \quad \quad \quad \quad \quad \quad \quad \quad \quad \quad + \underbrace{\left[ \frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} \right]}_{\stackrel{!}{=} 0} \cdot \mathbf{h}^* \Bigg) \, ,
\end{align}
for all $\mathbf{h}(\mathbf{x}, \omega)$. And consequently we get the equations
\begin{align}
\frac{\omega^2}{c^2} \left[ ({\underline{\underline{\epsilon}}}^{\dagger}-\mathbb{1}) \mathbf{A} \right]^* + \frac{\omega^2}{c^2} \mathbf{A}^* - \nabla \times \nabla \times \mathbf{A}^* &= 0 \\
\frac{\omega^2}{c^2} \left[({\underline{\underline{\epsilon}}}- \mathbb{1}) \mathbf{A} \right] + \frac{\omega^2}{c^2} \mathbf{A} - \nabla \times \nabla \times \mathbf{A} &= 0 \, .
\end{align}
If we suppose a lossy dielectric body, such that $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$, the equations are in contradiction.
Time-domain
An analogues derivation in the time-domain (which I can post here on request), yields the wave equation (in Fourier space)
$$\frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}-1) \mathbf{A} + \frac{\omega^2}{2 c^2} (\underline{\underline{\epsilon}}^{\dagger} -1) \mathbf{A} + \frac{\omega^2}{c^2} \mathbf{A} - \left( \nabla \times \nabla \times \mathbf{A} \right) = 0 \, .$$
This result is also not resembling the expected result, for $\underline{\underline{\epsilon}}^{\dagger} \neq \underline{\underline{\epsilon}}$.
Question
What went wrong in the calculation?
It seems to me that your Lagrangian does not reflect the physical phenomena, in particular the absorption of an EM wave: in the case where your two equations agree there is no absorption (see Landau & Lifshitz, Vol. VIII and also Vol. V), and all the results follow from the Kramers–Kronig relations: https://en.wikipedia.org/wiki/Kramers%E2%80%93Kronig_relations
| {
"language": "en",
"url": "https://mathoverflow.net/questions/404380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Prove spectral equivalence of matrices Let $A,D \in \mathbb{R}^{n\times n}$ be two positive definite matrices given by
$$
D =
\begin{bmatrix}
1 & -1 & 0 & 0 & \dots & 0\\
-1 & 2 & -1 & 0 & \dots & 0\\
0 & -1 & 2 & -1 & \dots & 0\\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
0 & \dots & 0 & -1 & 2 & -1\\
0 & 0 & \dots & 0 & -1 & 1
\end{bmatrix}, \quad
A =
\begin{bmatrix}
c_{1,2} & -c_{1,2} & 0 & 0 & \dots & 0\\
-c_{2,1} & c_{2,1} + c_{2,3} & -c_{2,3} & 0 & \dots & 0\\
0 & -c_{3,2} & c_{3,2} + c_{3,4} & -c_{3,4} & \dots & 0\\
\vdots & \ddots & \ddots & \ddots & \ddots & 0\\
0 & \dots & 0 & -c_{n-1,n-2} & c_{n-1,n-2} + c_{n-1,n} & -c_{n-1,n}\\
0 & 0 & \dots & 0 & -c_{n,n-1} & c_{n,n-1}
\end{bmatrix}
$$
with $c_{i,j} = c_{j,i} \in (0,c_+]$ for all $i,j=1,\dots,n$ for a $c_+ \in (0,\infty)$.
I would like to prove that, independently of the dimension $n$,
$$x^\top A x \le c_+ x^\top D x$$
holds for all $x\in \mathbb{R}^n$.
If this is not the case does there exist a counter example?
This is related to the statement that the norms induced by the scalar products of the matrices $A$ and $D$ are equivalent, with constant $c_+$.
| This seems a counterexample: $c_+=1$ (which you can assume without loss of generality), $n=2$, $A = \begin{bmatrix}2 & -\varepsilon \\\ -\varepsilon & 2\end{bmatrix}$, $x = \begin{bmatrix}1 \\ 1\end{bmatrix}$ gives $x^*Dx=2, x^*Ax = 4 - 2\varepsilon$, so the inequality is reversed.
This is not just a wrong sign, since $x = \begin{bmatrix}1 \\ -1\end{bmatrix}$ gives an inequality with the opposite sign. There just does not seem to be an inequality of that kind valid for every $x$.
| {
"language": "en",
"url": "https://mathoverflow.net/questions/425300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the probability that the matrix is orthogonal? Let $X$ and $Y$ be independent $Bin(3,\frac{1}{3})$ random variables. Then what is the probability that the matrix, $$
P=\begin{bmatrix}
\frac{X}{\sqrt2}&\frac{Y}{\sqrt2}\\\frac{-1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}$$is orthogonal?
My approach:$$PP^T=\begin{bmatrix}
\frac{X}{\sqrt2}&\frac{Y}{\sqrt2}\\\frac{-1}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}\begin{bmatrix}
\frac{X}{\sqrt2}&\frac{-1}{\sqrt2}\\\frac{Y}{\sqrt2}&\frac{1}{\sqrt2}\end{bmatrix}=\begin{bmatrix}
\frac{X^2+Y^2}{2}&\frac{Y-X}{2}\\\frac{Y-X}{2}&{1}\end{bmatrix}=\begin{bmatrix}
1&0\\0&1\end{bmatrix}$$Now,to find the required probability,it is enough to find,$P[Y-X=0].$Is this right?if it is,How can we find that?
| $X=Y$ is not enough. You also need $X^2+Y^2=2$. These two conditions yield $X=Y=1$. Since $\mathbb{P}(X=1)= {{3}\choose{1}}(\frac{1}{3})(1-\frac{1}{3})^2= \frac{4}{9}$ and $X$ and $Y$ are independent, the required probability is $\frac{16}{81}$.
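A quick simulation agrees with $16/81 \approx 0.1975$ (a sketch; the seed and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.binomial(3, 1/3, size=1_000_000)
Y = rng.binomial(3, 1/3, size=1_000_000)
print(np.mean((X == 1) & (Y == 1)), 16/81)   # orthogonality needs X = Y = 1
```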
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/219527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is $n < p$ a problem for OLS regression? I realize I can't invert the $X'X$ matrix but I can use gradient descent on the quadratic loss function and get a solution. I can then use those estimates to calculate standard errors and residuals. Am I going to encounter any problems doing this?
| Here is a little specific example to illustrate the issue:
Suppose you want to fit a regression of $y_i$ on $x_i$, $x_i^2$ and a constant, i.e.
$$ y_i = a x_i + b x_i^2 + c + u_i $$
or, in matrix notation,
\begin{align*}
\mathbf{y} =
\begin{pmatrix}
y_1 \\
\vdots \\
y_n
\end{pmatrix}, \quad
\mathbf{X} =
\begin{pmatrix}
1 & x_1 & x_1^2 \\
\vdots & \vdots & \vdots \\
1 & x_n & x_n^2
\end{pmatrix}, \quad
\boldsymbol{\beta} =
\begin{pmatrix}
c \\ a \\ b
\end{pmatrix}, \quad
\mathbf{u} =
\begin{pmatrix}
u_1 \\ \vdots \\ u_n
\end{pmatrix}
\end{align*}
Suppose you observe $\mathbf{y}^T=(0,1)$ and $\mathbf{x}^T=(0,1)$, i.e., $n=2<p=3$.
Then, the OLS estimator is
\begin{align*}
\widehat{\boldsymbol{\beta}} =& \, (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{y} \\
=& \, \left[
\begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 1
\end{pmatrix}^T
\begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 1
\end{pmatrix}
\right]^{-1}
\begin{pmatrix}
1 & 0 & 0 \\
1 & 1 & 1
\end{pmatrix}^T
\begin{pmatrix}
0 \\ 1
\end{pmatrix} \\
=& \, \left[\underbrace{\begin{pmatrix}
2 & 1 & 1 \\
1 & 1 & 1 \\
1 & 1 & 1
\end{pmatrix}}_{\text{not invertible, as $\mathrm{rk}()=2\neq 3$}}
\right]^{-1}
\begin{pmatrix}
1 \\ 1 \\ 1
\end{pmatrix}
\end{align*}
There are infinitely many solutions to the problem: setting up a system of equations and inserting the observations gives
\begin{align*}
0 =& \, a \cdot 0 + b \cdot 0^2 + c \ \ \Rightarrow c=0 \\
1 =& \, a \cdot 1 + b \cdot 1^2 + c \ \ \Rightarrow 1 = a + b
\end{align*}
Hence, all $a=1-b$, $b \in \mathbb{R}$ satisfy the regression equation.
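The same example can be played through numerically: $X^\top X$ is singular, gradient descent on the quadratic loss still converges to an exact fit (from a zero start it happens to be the minimum-norm one), but shifting the estimate along the null space of $X$ fits the data equally well, so the "solution" and anything built on it, such as standard errors, is not unique. A minimal sketch:

```python
import numpy as np

X = np.array([[1., 0., 0.],
              [1., 1., 1.]])        # n = 2 observations, p = 3 coefficients
y = np.array([0., 1.])

print(np.linalg.matrix_rank(X.T @ X))   # 2 < 3: X'X is singular

beta = np.zeros(3)
for _ in range(20_000):                  # plain gradient descent on the squared loss
    beta -= 0.1 * X.T @ (X @ beta - y)
print(beta, X @ beta)                    # an exact fit, here (0, 0.5, 0.5)

null = np.array([0., 1., -1.])           # a direction in the null space of X
print(X @ (beta + 3*null))               # any such shift fits the data just as well
```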
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/282663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Proof of Convergence in Distribution with unbounded moment I posted the question here, but no one has provided an answer, so I am hoping I could get an answer here. Thanks very much!
Prove that given $\{X_n\}$ being a sequence of iid r.v's with density $|x|^{-3}$ outside $(-1,1)$, the following is true:
$$
\frac{X_1+X_2 + \dots +X_n}{\sqrt{n\log n}} \xrightarrow{\mathcal{D}}N(0,1).
$$
The original post has a 2 in the square root of the denominator. There should not be a 2.
| This is a proof by c.f. approach:
The c.f. of $X_i$ is
$$
\phi_i(t) = \int_{R}e^{itx}|x|^{-3}\boldsymbol{1}_{x \notin (-1,1)}dx = 2\int_{1}^{\infty}\frac{\cos(tx)}{x^3}dx.
$$
Hence, for $Y_n = (X_1+X_2+\dots+X_n)(\sqrt{n\log n})^{-1}$, we have
\begin{align*}
\phi_{Y_n}(t) =& \phi_i\left(\frac{t}{\sqrt{n\log n}}\right)^n\\
=& \left(2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx\right)^n.\\
\end{align*}
We first consider the integral:
\begin{align*}
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx =& 1 + 2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 + 2\int_{1}^{\sqrt{n\log\log n}}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx \\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx,
\end{align*}
since for $x \in [1, \sqrt{n\log\log n}]$, ${\displaystyle \frac{tx}{\sqrt{n\log n}}} \to 0$ as $n \to \infty$. Hence, we can apply the Taylor expansion of the cosine term in the first integral around $0$. Then we have
\begin{align*}
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx =& 1 + 2\int_{1}^{\sqrt{n\log\log n}}-\frac{t^2}{2n\log nx} + \left[\frac{t^4x}{24(n\log n)^2 }-\dots\right]dx \\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 + 2\int_{1}^{\sqrt{n\log\log n}}-\frac{t^2}{2n\log nx}dx + o(1/n)\\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
=& 1 -\frac{t^2\log( n\log\log n)}{2n\log n} + o(1/n)\\
+& 2\int_{\sqrt{n\log\log n}}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}dx\\
\end{align*}
Now
\begin{align*}
\int_{\sqrt{n\log\log n}}^{\infty}|\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}-\frac{1}{x^3}|dx \leq& \int_{\sqrt{n\log\log n}}^{\infty}\frac{2}{x^3}dx\\
=& \frac{1}{n\log\log n} \in o(1/n).
\end{align*}
Hence,
$$
2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx = 1 -\frac{t^2\log( n\log\log n)}{2n\log n} + o(1/n).
$$
Let $n \to \infty$, we have
$$
\lim_{n \to \infty}\left(2\int_{1}^{\infty}\cos\left(\frac{tx}{\sqrt{n\log n}}\right)\frac{1}{x^3}dx\right)^n = \lim_{n \to \infty}\left(1 -\frac{t^2\log( n\log\log n)}{2n\log n}\right)^n = \lim_{n \to \infty}\left(1-\frac{t^2}{2n}\right)^n = e^{-t^2/2},
$$
which completes the proof.
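As a purely empirical sanity check of the statement (not part of the proof), one can simulate from the density, which is $|x|^{-3}$ for $|x|\ge 1$ and $0$ inside $(-1,1)$, via $|X| = U^{-1/2}$ with a random sign, and compare the normalized sums with $N(0,1)$; convergence is slow because of the logarithmic normalization, so the agreement is only approximate at moderate $n$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps = 50_000, 500
z = np.empty(reps)
for r in range(reps):
    u = rng.uniform(size=n)
    sign = np.where(rng.uniform(size=n) < 0.5, -1.0, 1.0)
    x = sign / np.sqrt(u)                     # P(|X| > t) = t**-2 for t >= 1
    z[r] = x.sum() / np.sqrt(n*np.log(n))
print(np.percentile(z, [5, 25, 50, 75, 95]))  # compare with N(0,1) quantiles
print(stats.kstest(z, 'norm'))                # rough agreement, not an exact match
```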
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/376848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Prove that $\frac{1}{n(n-1)}\sum_{i=1}^{n}(X_{i} - \overline{X})^{2}$ is an unbiased estimate of $\text{Var}(\overline{X})$ If $X_{1},X_{2},\ldots,X_{n}$ are independent random variables with common mean $\mu$ and variances $\sigma^{2}_{1},\sigma^{2}_{2},\ldots,\sigma^{2}_{n}$. Prove that
\begin{align*}
\frac{1}{n(n-1)}\sum_{i=1}^{n}(X_{i} - \overline{X})^{2}
\end{align*}
is an unbiased estimate of $\text{Var}[\overline{X}]$.
MY ATTEMPT
To begin with, I tried to approach the problem as it follows
\begin{align*}
\text{Var}(\overline{X}) = \text{Var}\left(\frac{X_{1} + X_{2} + \ldots + X_{n}}{n}\right) = \frac{\sigma^{2}_{1} + \sigma^{2}_{2} + \ldots + \sigma^{2}_{n}}{n^{2}}
\end{align*}
But I do not know how to proceed from here. Any help is appreciated.
| Let's minimize the algebra. We can do this by focusing on the coefficient of $\sigma_1^2.$
First, because all the variables have the same mean,
$$E[X_i - \bar X]=\mu - \mu = 0$$
for all $i,$ implying
$$\operatorname{Var}(X_i - \bar X) = E[(X_i - \bar X)^2] - E[X_i - \bar X]^2 = E[(X_i - \bar X)^2].$$
Since the sum of the $(X_i-\bar X)^2$ is invariant under any permutation of the indexes, let's study the case $i=1$ because it will show us what happens with all the $i.$
We can easily split $X_1 - \bar X$ into independent (and therefore uncorrelated) parts as
$$X_1 - \bar X = X_1 - \frac{1}{n}\left(X_1 + X_2 + \cdots + X_n\right) = \frac{n-1}{n}\left(X_1 - \frac{1}{n-1}\left(X_2 + \cdots + X_n\right)\right).$$
Taking variances immediately gives
$$\operatorname{Var}(X_1 - \bar X) = \left(\frac{n-1}{n}\right)^2 \left(\sigma_1^2 + \frac{1}{(n-1)^2}\left(\sigma_2^2 + \cdots + \sigma_n^2\right)\right).$$
(This is the payoff from observing that the expectation of the square of $X_i - \bar X$ is its variance.)
When summing over all $i$ and ignoring the common factor of $((n-1)/n)^2,$ $\sigma_1^2$ will therefore appear once and it will appear $n-1$ more times with a factor of $1/(n-1)^2,$ for a total coefficient of
$$1 + (n-1)\left(\frac{1}{(n-1)^2}\right) = \frac{n}{n-1}.$$
Consequently every $\sigma_i^2$ appears with this coefficient, whence
$$\eqalign{
E \Bigg[ \frac{1}{n(n-1)}\sum_{i=1}^{n}(X_i - \bar{X})^{2} \Bigg]
&= \frac{1}{n(n-1)} \sum_{i=1}^{n}\operatorname{Var}(X_i - \bar{X}) \\
&= \frac{1}{n(n-1)} \left(\frac{n-1}{n} \right)^2 \left[ \frac{n}{n-1} \sum_{i=1}^n \sigma_i^2 \right] \\
&= \frac{\sigma_1^2 + \sigma_2^2 + \cdots + \sigma_n^2}{n^2},
}$$
QED.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/404174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Bishop derivation completing the square in variational inference I don't understand the derivation on page 467. Bishop says:
Given the optimal factor $q_1^*(z_1)$
\begin{equation}
ln~q_1(z_1) = -\frac{1}{2} z_1^2 \Lambda_{11} + z_1 \mu_1 \Lambda_{11} - z_1 \Lambda_{12}( \mathbb{E}[z_2] - \mu_2) + cst
\end{equation}
using the technique completing the square, we can identify the mean and precision of this gaussian, giving:
\begin{equation}
q^*(z_1) = \mathcal{N}(z_1 \mid m_1, \Lambda_{11}^{-1})
\end{equation}
where
\begin{equation}
m_1 = \mu_1 - \Lambda_{11}^{-1} \Lambda_{12}( \mathbb{E}[z_2] - \mu_2)
\end{equation}
I don't really understand this method of completing the square in this situation, nor how he got $m_1$.
| $$
\begin{align}
ln~q_1(z_1) &= -\frac{1}{2} z_1^2 \Lambda_{11} + z_1 \mu_1 \Lambda_{11} - z_1 \Lambda_{12}( \mathbb{E}[z_2] - \mu_2) + const. \\
&= -\frac{1}{2} \Lambda_{11} \left(z_1^2 - 2\Lambda_{11}^{-1}z_1 \mu_1\Lambda_{11} + 2\Lambda_{11}^{-1}z_1\Lambda_{12}( \mathbb{E}[z_2] - \mu_2)\right) + const. \\
&= -\frac{1}{2} \Lambda_{11} \left(z_1^2 - 2z_1(\mu_1 - \Lambda_{11}^{-1}\Lambda_{12}( \mathbb{E}[z_2] - \mu_2))\right) + const. \\
\end{align}
$$
now, let's say
$$
m_1 = \mu_1 - \Lambda_{11}^{-1}\Lambda_{12}( \mathbb{E}[z_2] - \mu_2)
$$
then we can rewrite previous equation in terms of $m_1$
$$
\begin{align}
ln~q_1(z_1) &= -\frac{1}{2} \Lambda_{11} \left(z_1^2 - 2z_1m_1\right) + const. \\
&= -\frac{1}{2} \Lambda_{11} \left(z_1^2 - 2z_1m_1 + m_1^2 - m_1^2\right) + const. \\
&= -\frac{1}{2} \Lambda_{11} \left(z_1^2 - 2z_1m_1 + m_1^2\right) + \frac{1}{2} \Lambda_{11}m_1^2 + const. \\
&= -\frac{1}{2} \Lambda_{11} (z_1 - m_1)^2 + \frac{1}{2} \Lambda_{11}m_1^2 + const. \\
&= -\frac{1}{2} \Lambda_{11} (z_1 - m_1)^2 + const. \\
\end{align}
$$
Now you can see that $const.$ is just part of the normalisation constant of the Gaussian distribution, $m_1$ is the mean of the Gaussian, and $\Lambda_{11}$ is the precision.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/447062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
trying to derive expression with polynomial arithmetic I'm trying to figure out how this is derived in "Time Series Analysis - Forecasting and Control" (Box, Jenkins):
$$
a_t = \frac{1}{1-\theta B}z_t = (1 + \theta B + \theta^2 B^2+ ... + \theta^k B^k)(1-\theta^{k+1} B^{k+1})^{-1}z_t
$$
I can derive this :
$$\frac{1}{1-\theta B} = (1 + \theta B + \theta^2 B^2+ ... ) = (1 + \theta B + \theta^2 B^2 + ... + \theta^kB^k) + \frac{\theta^{k+1}B^{k+1}}{1-\theta B} $$
How does one derive the top expression?
| Here's the way I got through it:
$\frac{1}{1 - \theta B} = (1 + \theta B + \theta^2 B^2 + ...) = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k) + (\theta^{k + 1}B^{k + 1} + \theta^{k + 2}B^{k + 2} + ...)$
$ = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k) + \theta^{k+1}B^{k+1}(1 + \theta B + \theta^2 B^2 + ...)$
So now we have
$(1 + \theta B + \theta^2 B^2 + ...) = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k) + \theta^{k+1}B^{k+1}(1 + \theta B + \theta^2 B^2 + ...)$.
Subtracting gives
$(1 + \theta B + \theta^2 B^2 + ...) - \theta^{k+1}b^{k+1}(1 + \theta B + \theta^2 B^2 + ...) = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k)$
$\Rightarrow (1 - \theta^{k+1}B^{k+1})(1 + \theta B + \theta^2 B^2 + ...) = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k)$
And finally, dividing both sides by $(1 - \theta^{k+1}B^{k+1})$, gives the desired result:
$\frac{1}{1 - \theta B} = (1+ \theta B + \theta ^2B^2 + ... + \theta^kB^k)(1 - \theta^{k+1}B^{k+1})^{-1}$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/449672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Jacobian of Inverse Gaussian Transformation in Schwarz & Samanta (1991) In the sample size $n=2$ case when transforming $\{x_1, x_2\}$ to $\{\bar{x}, s\}$ (where $X_1, X_2 \overset{iid}{\sim} IG(\mu, \lambda)$, $\bar{X}=\frac{\sum_i^2 X_i}{n}$, and $S=\sum_i^2 \left(\frac{1}{X_i}-\frac{1}{\bar{X}}\right)$), Schwarz and Samanta (1991) write:
I understand the motivation for restricting the transformation to this half-plane, but I'm at a loss for how to find this Jacobian myself. I understand how to find Jacobians for one-to-one transformations, but I'd appreciate a push in the right direction for cases like this.
| For simplicity write $x=x_1$ and $y=x_2,$ so that
$$2\bar x = x+y,\quad s = \frac{1}{x} + \frac{1}{y} - \frac{4}{x+y}.\tag{*}$$
Their Jacobian $J(\bar x, s)$ can be computed by comparing the differential elements in the two coordinate systems
$$\mathrm{d} \bar x \wedge \mathrm{d} s = \frac{1}{2}(\mathrm{d} x + \mathrm{d} y) \wedge \left(-\frac{\mathrm{d} x}{x^2} - \frac{\mathrm{d} y}{y^2} + \frac{4(\mathrm{d} x + \mathrm{d} y)}{(x+y)^2}\right)=\frac{1}{2} \left(\frac{1}{x^2}-\frac{1}{y^2}\right)\mathrm{d} x\wedge \mathrm{d} y$$
(These differential forms and the wedge product are described at Construction of Dirichlet distribution with Gamma distribution.)
It remains only to divide both sides by $(1/x^2 - 1/y^2)/2$ and then re-express the left hand side entirely in terms of $\bar x$ and $s.$ Elementary algebra (see below for details) gives
$$\frac{1}{x^2}-\frac{1}{y^2} = \frac{s^{1/2}(2 + s \bar{x})^{3/2}}{\bar{x}^{3/2}},$$
whence
$$\mathrm{d} x\wedge \mathrm{d} y = \frac{1}{\frac{1}{2} \left(\frac{1}{x^2}-\frac{1}{y^2}\right)}\mathrm{d} \bar{x} \wedge \mathrm{d} s =\frac{2\bar{x}^{3/2}}{\sqrt{s}(2 + s \bar{x})^{3/2}}\,\mathrm{d} \bar x \wedge \mathrm{d} s = J(\bar x, s) \,\mathrm{d} \bar x \wedge \mathrm{d} s.$$
Thus, when converting an integral in terms of $(x,y)$ to one in terms of $(\bar x, s)$, the differential element on the left side must be replaced by the differential element on the right; the absolute value of the factor $J(\bar x, s)$ that is introduced is the Jacobian.
Here is one way to do the algebra. Multiplying $s$ by $2\bar x$ to clear the fraction in $(*)$ gives
$$2\bar{x} s = \frac{(y-x)^2}{xy}.$$
Adding $4$ to that produces
$$2\bar{x} s + 4 = \frac{(x+y)^2}{xy} = \frac{(2\bar x)^2}{xy},$$
which upon dividing both sides by $(2\bar x)^2$ establishes
$$\frac{1}{xy} = \frac{2\bar{x} s + 4}{(2\bar x)^2}.$$
For $y\gt x \gt 0$ we may combine the three preceding results to obtain
$$\left(\frac{1}{x^2} - \frac{1}{y^2}\right)^2 = \frac{(y-x)^2(x+y)^2}{(xy)^4} = (2\bar{x}s)(2\bar{x}s + 4)\left(\frac{2\bar{x} s + 4}{(2\bar x)^2}\right)^2.$$
Collecting identical terms in the fraction and taking square roots gives
$$\frac{1}{x^2} - \frac{1}{y^2} = \left(\frac{s(2\bar{x} s + 4)^3}{(2\bar x)^3}\right)^{1/2}$$
which easily simplifies to the expression in the question.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/450621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is there any way of simplifying the covariance of a function of a random variable and a random variable? For example, if we had random variables $X$ and $Y$ and we know that $corr(X,Y)=\rho$, how would you solve for $Cov(e^X,Y)$?
| The following is given,
$$
\int dxdy \ p(x, y)(x - \mu_x)(y - \mu_y) = \sigma_x\sigma_y\rho,
$$
where $\mu_x(\mu_y)$ and $\sigma_x(\sigma_y)$ are the mean and standard deviation of $X(Y)$,
the requested covariance is given by
$$
\begin{aligned}
{\rm Cov}(e^X, Y) &= \int dxdy \ p(x, y)(e^x - \mu_{e^x})(y - \mu_y)\\
&=\int dxdy \ p(x, y)\left(1 + x + \frac{1}{2!}x^2 + \frac{1}{3!}x^3 + \cdots - \mu_{e^x}\right)(y - \mu_y)
\end{aligned}
$$
The mean $\mu_{e^x}$
$$
\begin{aligned}
\mu_{e^x} &= \int dxdy \ p(x, y) e^x\\
&= M_X(t=1)\\
&= 1 + \mu_x + \frac{1}{2!}m_{2,x} + \frac{1}{3!}m_{3,x} + \cdots
\end{aligned}
$$
where $M_X$(t) is the moment generating function (see the definition here), and $m_{n,x}$ is the $n$-th moment of $p_X(x) = \int dy \ p(x, y)$.
We can further simplify the expression as
$$
\begin{aligned}
{\rm Cov}(e^X, Y) &= \int dxdy \ p(x, y)(y - \mu_y)\left(1 + x + \frac{1}{2!}x^2 + \cdots - 1 - \mu_x - \frac{1}{2!}m_{2,x} - \cdots\right)\\
&= \sigma_x\sigma_y\rho + \sum_{n=2}^{\infty}\frac{1}{n!}\int dxdy \ p(x, y)(y - \mu_y)(x^n - m_{n,x}).
\end{aligned}
$$
Therefore we would need a lot more information to obtain the desired covariance.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/570616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
PDF of $Z=X^2 + Y^2$ where $X,Y\sim N(0,\sigma)$ Using the normal distribution probability density function (pdf),
\begin{align}
f_X(x) = f_Y(x) &= \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{x^2}{2\sigma^2}} \\
\end{align}
Taking $Z' = X^2$ (and $Y^2$ has the same distribution), the corresponding pdf is
\begin{align}
f_{Z'}(z) &= f_x(x) \left| \frac{\delta x}{\delta z}\right| = \frac{1}{\sigma\sqrt{2 \pi}} e^{- \frac{z}{2 \sigma^2}} \left( \frac{1}{2\sqrt{z}}\right) \\
\end{align}
Since $X$ and $Y$ are independent, the pdf of $Z$ can be calculated using the convolution
\begin{align}
f_z(z) = \int^{\infty}_{-\infty}f_{X^2}(x)f_{Y^2}(z-x)dx
\end{align}
Is this approach correct?
| Following the comment by whuber, for problems like this that involve convolutions of IID random variables, it is generally simpler to work with the characteristic function than with the density function. Using the law of the unconscious statistician we can obtain the characteristic function for $X^2$ as follows:
$$\begin{align}
\phi_{X^2}(t)
&\equiv \mathbb{E}(\exp(itX^2)) \\[16pt]
&= \int \limits_{-\infty}^\infty \exp(it x^2) \cdot \text{N}(x|0, \sigma^2) \ dx \\[6pt]
&= \int \limits_{-\infty}^\infty \exp(it x^2) \cdot \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \exp \bigg( -\frac{1}{2 \sigma^2} \cdot x^2 \bigg) \ dx \\[6pt]
&= \int \limits_{-\infty}^\infty \frac{1}{\sqrt{2 \pi \sigma^2}} \cdot \exp \bigg( -\frac{1-2it \sigma^2}{2 \sigma^2} \cdot x^2 \bigg) \ dx \\[6pt]
&= \frac{1}{\sqrt{1-2it \sigma^2}} \int \limits_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \sqrt{\frac{1-2it \sigma^2}{\sigma^2}} \cdot \exp \bigg( -\frac{1-2it \sigma^2}{2 \sigma^2} \cdot x^2 \bigg) \ dx \\[6pt]
&= \frac{1}{\sqrt{1-2it \sigma^2}} \int \limits_{-\infty}^\infty \text{N}\bigg( x \bigg| 0, \frac{\sigma^2}{1-2it \sigma^2} \bigg) \ dx \\[6pt]
&= \frac{1}{\sqrt{1-2it \sigma^2}}. \\[6pt]
\end{align}$$
(And of course, we have $\phi_{X^2}(t) = \phi_{Y^2}(t)$ in this case so the latter characteristic function is the same.) We then have:
$$\begin{align}
\phi_Z(t)
&\equiv \mathbb{E}(\exp(itZ)) \\[16pt]
&= \mathbb{E}(\exp(itX^2 + itY^2)) \\[16pt]
&= \mathbb{E}(\exp(itX^2)) \cdot \mathbb{E}(\exp(itY^2)) \\[12pt]
&= \frac{1}{\sqrt{1-2it \sigma^2}} \cdot \frac{1}{\sqrt{1-2it \sigma^2}} \\[6pt]
&= \frac{1}{1-2it \sigma^2}. \\[6pt]
\end{align}$$
This is the characteristic function for the scaled chi-squared distribution with two degrees-of-freedom. Using the fact that the characteristic function is a unique representative of the distribution, you then have:
$$Z \sim \sigma^2 \cdot \text{ChiSq}(\text{df} = 2).$$
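A quick simulation sketch in R (with an arbitrary choice $\sigma = 2$) confirms the result by comparing empirical quantiles of $Z$ with those of $\sigma^2 \cdot \text{ChiSq}(2)$:
set.seed(1)
sigma <- 2
x <- rnorm(1e5, 0, sigma); y <- rnorm(1e5, 0, sigma)
z <- x^2 + y^2
rbind(empirical   = quantile(z, c(0.25, 0.5, 0.75, 0.95)),
      theoretical = sigma^2 * qchisq(c(0.25, 0.5, 0.75, 0.95), df = 2))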
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/588820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Distance between the product of marginal distributions and the joint distribution Given a joint distribution $P(A,B,C)$, we can compute various marginal distributions. Now suppose:
\begin{align}
P1(A,B,C) &= P(A) P(B) P(C) \\
P2(A,B,C) &= P(A,B) P(C) \\
P3(A,B,C) &= P(A,B,C)
\end{align}
Is it true that $d(P1,P3) \geq d(P2,P3)$ where d is the total variation distance?
In other words, is it provable that $P(A,B) P(C)$ is a better approximation of $P(A,B,C)$ than $P(A) P(B) P(C)$ in terms of the total variation distance? Intuitively I think it's true but could not find out a proof.
| I just found the following counter-example. Suppose $A,B,C$ are discrete variables. $A,B$ can each take two values while $C$ can take three values.
The joint distribution $P(A,B,C)$ is:
\begin{array}{cccc}
A & B & C & P(A,B,C) \\
1 & 1 & 1 & 0.1/3 \\
1 & 1 & 2 & 0.25/3 \\
1 & 1 & 3 & 0.25/3 \\
1 & 2 & 1 & 0.4/3 \\
1 & 2 & 2 & 0.25/3 \\
1 & 2 & 3 & 0.25/3 \\
2 & 1 & 1 & 0.4/3 \\
2 & 1 & 2 & 0.25/3 \\
2 & 1 & 3 & 0.25/3 \\
2 & 2 & 1 & 0.1/3 \\
2 & 2 & 2 & 0.25/3 \\
2 & 2 & 3 & 0.25/3 \\
\end{array}
So the marginal distribution $P(A,B)$ is:
\begin{array}{ccc}
A & B & P(A,B) \\
1 & 1 & 0.2 \\
1 & 2 & 0.3 \\
2 & 1 & 0.3 \\
2 & 2 & 0.2 \\
\end{array}
The marginal distributions $P(A), P(B)$ and $P(C)$ are uniform.
So we can compute that:
\begin{align}
d(P1,P3) &= 0.1 \\
d(P2,P3) &= 0.4/3
\end{align}
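A short R sketch reproduces these two total variation distances directly from the table (using the convention $d(P,Q) = \tfrac{1}{2}\sum |P-Q|$):
p <- array(0, dim = c(2, 2, 3))              # p[a, b, c] = P(A=a, B=b, C=c)
p[1, 1, ] <- c(0.1, 0.25, 0.25)/3
p[1, 2, ] <- c(0.4, 0.25, 0.25)/3
p[2, 1, ] <- c(0.4, 0.25, 0.25)/3
p[2, 2, ] <- c(0.1, 0.25, 0.25)/3
pa <- apply(p, 1, sum); pb <- apply(p, 2, sum); pc <- apply(p, 3, sum)
pab <- apply(p, c(1, 2), sum)
p1 <- outer(outer(pa, pb), pc)               # P(A)P(B)P(C)
p2 <- outer(pab, pc)                         # P(A,B)P(C)
c(d13 = sum(abs(p1 - p))/2, d23 = sum(abs(p2 - p))/2)   # 0.1 and 0.1333...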
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/65608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Probability of getting 4 Aces What is the probability of drawing 4 aces from a standard deck of 52 cards. Is it:
$$
\frac{1}{52} \times \frac{1}{51} \times \frac{1}{50} \times \frac{1}{49} \times 4!
$$
or do I simply say it is:
$$
\frac{4}{52} = \frac{1}{13}
$$
| The first answer you provided ($\frac{1}{52}\times \frac{1}{51}\times \frac{1}{50}\times \frac{1}{49}\times 4!$) is correct.
If we draw four cards in order from 52 cards, the total number of ordered outcomes is $C_4^{52}\, 4! = 52\times 51\times 50\times 49$.
The number of ordered outcomes in which all four cards are aces is $4!$.
Thus the probability of drawing 4 aces from a standard deck of 52 cards is
$$
\frac{4!}{C_4^{52} 4!} = \frac{1}{C_4^{52}} = \frac{4!}{52\times 51\times 50\times 49}
$$
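A one-line sanity check in R:
c(factorial(4)/(52*51*50*49), 1/choose(52, 4))   # both equal about 3.69e-06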
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/115061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Picking socks probability proof I'm trying to follow the proof from question 1b out of the book "50 Challenging Problems in Probability" by Mosteller. The problem states:
A drawer contains red and black socks. When two socks are drawn at random, the probability that both are red is $1/2$. How small can the number of socks in the drawer be if the number of black socks is even?
The answer given starts as follows:
Let there be $r$ red and $b$ black socks. The probability of the first socks' being red is $\frac{r}{r+b}$ and if the first sock is red, the probability of the second's being red now that a red has been removed is $\frac{r-1}{r+b-1}$. Then we require the probability that both are red to be $\frac{1}{2}$, or $$\frac{r}{r+b} \times \frac{r-1}{r+b-1} = \frac{1}{2}$$
Notice that
$$\frac{r}{r+b} > \frac{r-1}{r+b-1} \quad \text{for b} >0$$
Therefore we can create the inequalities
$$\left(\frac{r}{r+b}\right)^2 > \frac{1}{2} > \left(\frac{r-1}{r+b-1}\right)^2$$
This is where I'm confused. Why is it that $\left(\frac{r}{r+b}\right)^2 > \frac{1}{2}$? If $r=1, b=100$ then obviously $\left(\frac{1}{101}\right)^2 < \frac{1}{2}$. Am I missing some obvious assumption?
| An explicit way to see the inequality is as follows. Start with the inequality between the multiplied fractions
$$\frac{r}{r+b}>\frac{r-1}{r+b-1}$$
Multiply both sides by $\frac{r}{r+b}$:
$$\left( \frac{r}{r+b}\right)^2>\frac{r}{r+b}\times\frac{r-1}{r+b-1}$$
The right hand side of the inequality is just $\frac{1}{2}$ from the initial probability equation. So
$$\left( \frac{r}{r+b}\right)^2>\frac{1}{2}$$
Similarly, if you multiplied the original inequality by $\frac{r-1}{r+b-1}$, you would get:
$$\frac{r}{r+b}\times\frac{r-1}{r+b-1}=\frac{1}{2}>\left(\frac{r-1}{r+b-1}\right)^2$$
Combine the two inequalities so obtained to conclude
$$\left( \frac{r}{r+b}\right)^2>\frac{1}{2}>\left(\frac{r-1}{r+b-1}\right)^2$$
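As a side note, a brute-force R sketch finds the smallest drawer satisfying the problem's condition with an even number of black socks (the search cap of 100 is arbitrary):
for (total in 3:100) {
  for (b in seq(2, total - 1, by = 2)) {
    r <- total - b
    if (2*r*(r - 1) == total*(total - 1)) cat("r =", r, " b =", b, " total =", total, "\n")
  }
}
# prints r = 15, b = 6, total = 21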
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/104482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
$E(X)E(1/X) \leq (a + b)^2 / 4ab$ I've worked on the following problem and have a solution (included below), but I would like to know if there are any other solutions to this problem, especially more elegant solutions that apply well known inequalities.
QUESTION: Suppose we have a random variable s.t. $P(a<X<b) =1$ where 0 < a < X < b , a and b both positive constants.
Show that $E(X)E(\frac{1}{X}) \le \frac{(a+b)^2}{4ab}$
Hint: find constant c and d s.t. $\frac{1}{x} \le cx+d$ when $a<x<b$, and argue that then we shall have $E(\frac{1}{X}) \le cE(X)+d$
MY SOLUTION: For a line $cx+d$ that cuts through $\frac{1}{X}$ at the points x=a and x = b, it's easy to show that $ c = - \frac{1}{ab} $ and $d = \frac{a+b}{ab} $,
So, $ \frac{1}{X} \le - \frac{1}{ab} X + \frac{a+b}{ab} $, and therefore:
$$ E(\frac{1}{X}) \le - \frac{1}{ab} E(X) + \frac{a+b}{ab} $$
$$ abE(\frac{1}{X}) + E(X) \le (a+b) $$
Now, because both sides of the inequality are positive, it follows that:
$$ [abE(\frac{1}{X}) + E(X)]^2 \le (a+b)^2 $$
$$ (ab)^2E(\frac{1}{X})^2 + 2abE(\frac{1}{X})E(X) + E(X)^2 \le (a+b)^2 $$
Then, for the LHS, we can see that
$2abE(\frac{1}{X})E(X) \le (ab)^2E(\frac{1}{X})^2 + E(X)^2$
because
$0 \le (ab)^2E(\frac{1}{X})^2 - 2abE(\frac{1}{X})*E(X) + E(X)^2 = [abE(\frac{1}{X}) - E(X)]^2 $
SO,
$$ 4abE(\frac{1}{X})E(X) \le (ab)^2E(\frac{1}{X})^2 + 2abE(\frac{1}{X})E(X) + E(X)^2 \le (a+b)^2 $$
and therefore:
$$ E(\frac{1}{X})E(X) \le \frac{(a+b)^2}{4ab} $$ Q.E.D.
Thanks for any additional solutions you might be able to provide. Cheers!
| I know it's stated in the problem, but I figured I'd put it in the answer bank:
For some line $cx+d$ that cuts through $\frac{1}{X}$ at the points x=a and x = b, it's easy to show that $ c = - \frac{1}{ab} $ and $d = \frac{a+b}{ab} $,
So, $ \frac{1}{X} \le - \frac{1}{ab} X + \frac{a+b}{ab} $, and therefore:
$$ E(\frac{1}{X}) \le - \frac{1}{ab} E(X) + \frac{a+b}{ab} $$
$$ abE(\frac{1}{X}) + E(X) \le (a+b) $$
Now, because both sides of the inequality are positive, it follows that:
$$ [abE(\frac{1}{X}) + E(X)]^2 \le (a+b)^2 $$
$$ (ab)^2E(\frac{1}{X})^2 + 2abE(\frac{1}{X})E(X) + E(X)^2 \le (a+b)^2 $$
Then, for the LHS, we can see that
$2abE(\frac{1}{X})E(X) \le (ab)^2E(\frac{1}{X})^2 + E(X)^2$
because
$0 \le (ab)^2E(\frac{1}{X})^2 - 2abE(\frac{1}{X})*E(X) + E(X)^2 = [abE(\frac{1}{X}) - E(X)]^2 $
SO,
$$ 4abE(\frac{1}{X})E(X) \le (ab)^2E(\frac{1}{X})^2 + 2abE(\frac{1}{X})E(X) + E(X)^2 \le (a+b)^2 $$
and therefore:
$$ E(\frac{1}{X})E(X) \le \frac{(a+b)^2}{4ab} $$
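A quick numeric sketch in R (using an arbitrary distribution supported on $[a,b] = [1,4]$) illustrates the bound:
set.seed(1)
a <- 1; b <- 4
x <- runif(1e6, a, b)
c(lhs = mean(x) * mean(1/x), bound = (a + b)^2/(4*a*b))   # about 1.16 vs 1.5625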
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/141766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Sum of binomial coefficients with increasing $n$ Is there any formula for calculating $$\binom{n}{1} + \binom{n+1}{2} + \binom{n+2}{3} + ... + \binom{n+m}{m+1}$$
I have tried an iterative method, but does a constant-time method exist?
| $\binom{n+1}{2} = \binom{n}{1} + \binom{n}{2} = \binom{n}{1} + \frac{n-1}{2} \binom{n}{1}$
$\binom{n+2}{3} = \binom{n+1}{2} + \binom{n+1}{3} = \binom{n+1}{2} + \frac{n-1}{3} \binom{n+1}{2}$
$\binom{n+3}{4} = \binom{n+2}{3} + \binom{n+2}{4} = \binom{n+2}{3} + \frac{n-1}{4} \binom{n+2}{3}$
...
...
$\binom{n+m}{m+1} = \binom{n+m-1}{m} + \binom{n+m-1}{m+1} = \binom{n+m-1}{m} + \frac{n-1}{m+1}\binom{n+m-1}{m} $
Start with $\binom{n}{1} = n$ and update each term in $O(1)$ from the previous one: multiplying $\binom{n+k-1}{k}$ by $\frac{n+k}{k+1}$ (equivalently, by $1 + \frac{n-1}{k+1}$) gives $\binom{n+k}{k+1}$, and you accumulate the sum as you go.
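A short R sketch of this update, checked against a direct evaluation (and, as a further check, against the hockey-stick identity $\sum_{k=0}^{m}\binom{n+k}{k+1} = \binom{n+m+1}{m+1} - 1$):
n <- 8; m <- 5
term <- n                                   # choose(n, 1)
total <- term
for (k in 1:m) {
  term <- term * (n + k)/(k + 1)            # choose(n+k, k+1) from the previous term, O(1)
  total <- total + term
}
c(total, sum(choose(n + 0:m, 1:(m + 1))), choose(n + m + 1, m + 1) - 1)   # all 3002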
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/144765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Expectation of joint probability mass function Let the joint probabilty mass function of discrete random variables X and Y be given by
$f(x,y)=\frac{x^2+y^2}{25}$, for $(x,y) = (1,1), (1,3), (2,3)$
The value of E(Y) is ?
Attempt
$E(Y) = \sum_{x,y} y\cdot\frac{x^2 + y^2}{25}$
$E(Y) = \sum_{x,y}\frac{x^2y + y^3}{25}$
Substituting for $(x,y) = (1,1), (1,3), (2,3)$
$E(Y) = \frac1{25} + \frac{30}{25} + \frac{39}{25}$
$E(Y) = 2.80$
Is this right?
| \begin{align}
\mathbb E[Y] &= \sum_y y\cdot\mathbb P(Y=y)\\
&= 1\cdot\mathbb P(Y=1) + 3\cdot\mathbb P(Y=3)\\
&= \frac{1^2+1^2}{25} + 3\left(\frac{1^2+3^2}{25}+\frac{2^2+3^2}{25} \right)\\
&= \frac{71}{25}.
\end{align}
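A two-line check in R:
x <- c(1, 1, 2); y <- c(1, 3, 3)
p <- (x^2 + y^2)/25
c(sum(p), sum(y*p))   # 1 and 2.84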
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/239111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What is the name of this random variable? Let $X \sim \text{Normal}(\mu, \sigma^2)$. Define $Y = \frac{e^X -1}{e^X+1}$. The inverse transformation is $X = \text{logit}\left(\frac{1+Y}{2}\right) = \log\left(\frac{1+Y}{1-Y} \right)$. By the transformation theorem
$$
f_Y(y) = f_X\left[ \log\left(\frac{1+y}{1-y}\right) \right]\times\frac{2}{(1-y)(1+y)}.
$$
Does this distribution have a name that I can look up? I have to evaluate this density pretty often when I use random walk Metropolis-Hastings and I sample for parameters $-1 < Y < 1$ (e.g. correlation parameters, AR(1) parameters, etc.) by transforming them into $X$ first, and then adding Gaussian noise to them.
| Thanks to @whuber's comment, we know if $X \sim \text{Normal}(\mu, \sigma^2)$, then $Z = e^X/(1+e^X)$ follows a $\text{Logit-Normal}(\mu, \sigma)$ and has density
$$
f_Z(z) = \frac{1}{\sigma\sqrt{2\pi}}\frac{1}{z(1-z)}\exp\left[-\frac{(\text{logit}(z) - \mu)^2}{2\sigma^2}\right].
$$
Then $Y = \frac{e^X -1}{e^X+1} = 2\left(\frac{e^X}{1+e^X}\right)-1 = 2Z-1$ is just a scaled and shifted logit-normal random variable with density
\begin{align*}
f_Y(y) &= f_Z\left(\frac{y+1}{2}\right)\times \frac{1}{2} \\
&= \frac{1}{\sigma\sqrt{2\pi}}\frac{2}{(1+y)(1-y)}\exp\left[-\frac{\left\{\log\left(\frac{1+y}{1-y}\right) - \mu\right\}^2}{2\sigma^2}\right]\\
&= f_X\left[ \log\left(\frac{1+y}{1-y}\right) \right]\times\frac{2}{(1-y)(1+y)}.
\end{align*}
Not part of the same family, but still good to know.
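A small R sketch (with arbitrary values $\mu = 0.3$, $\sigma = 0.8$) checks the density: it integrates to one, and its CDF matches a direct simulation of $Y = \tanh(X/2)$.
mu <- 0.3; s <- 0.8
fy <- function(y) dnorm(log((1 + y)/(1 - y)), mu, s) * 2/((1 - y)*(1 + y))
integrate(fy, -1, 1)$value                           # ~1
set.seed(1)
ysim <- tanh(rnorm(1e5, mu, s)/2)                    # same as (exp(X)-1)/(exp(X)+1)
c(mean(ysim <= 0.2), integrate(fy, -1, 0.2)$value)   # both ~P(Y <= 0.2)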
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/321905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show that, for $t>1$, $P[\frac{Y}{Z}\leq t]=\frac{t-1}{t+1}$ Let the distribution of $X$ be $U(0,1)$. Let U be the length of the shorter of the intervals $(0,X)$ and $(X,1)$; that is, $Z=min(X,1-X)$ and let $Y=1-Z$ be the length of the larger part. Show that, for $t>1$, $P[\frac{Y}{Z}\leq t]=\frac{t-1}{t+1}$;Find the PDF of $Y/Z$.
I am unable to find the distribution function. I can find the PDF of $Y/Z$. Please help me show that $P[\frac{Y}{Z}\leq t]=\frac{t-1}{t+1}$.
| First, let's determine the behaviour of $\frac{Y}{Z}$ given $X$. We have $\frac{Y}{Z} = \frac{1-X}{X} = \frac{1}{X} - 1$ for $X \in (0,\frac{1}{2}]$ and $\frac{Y}{Z} = \frac{X}{1-X} = \frac{1}{1-X} -1$ for $X \in [\frac{1}{2}, 1)$
Now the question is: when is $\frac{Y}{Z} \leq t$? For $X \in (0, \frac{1}{2}]$ it's equivalent to $\frac{1}{X} - 1 \leq t$ or $X \geq \frac{1}{t+1}$. For $X \in [\frac{1}{2}, 1)$ it is equivalent to $\frac{1}{1-X} - 1 \leq t$, which is equivalent to $1-X \geq \frac{1}{t+1}$ or $X \leq \frac{t}{t+1}$.
That means that $\frac{Y}{Z} \leq t $ if and only if $X \in [\frac{1}{t+1} , \frac{t}{t+1}]$. Since $X$ is distributed from $(0,1)$ with uniform probability, the probability it will be in desired range is equal to the length of this range, and hence we have the desired $P[\frac{Y}{Z} \leq t] = \frac{t-1}{t+1}$.
Now to get the PDF when you have the probability function, you just need to differentiate the probability over $t$.
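A quick simulation sketch in R (with an arbitrary choice $t = 3$):
set.seed(1)
x <- runif(1e6)
ratio <- pmax(x, 1 - x)/pmin(x, 1 - x)    # Y/Z
t <- 3
c(mean(ratio <= t), (t - 1)/(t + 1))      # both ~0.5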
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/33656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How do you express the quadratic equation in terms of mu and variance? The Wikipedia article for the Normal distribution says that it's more common to express the quadratic function $f(x) = ax^2+bx+c$ in terms of $\mu$ and $\sigma^2$ what is the transformation that I need to do to eventually solve $\mu = -\frac{b}{2a}$ and $\sigma^2 = -\frac{1}{2a}$
I know I can equate $ax^2 + bx +c =\frac{-(x-\mu)^2}{2\sigma^2}$, but I don't see how the right term is derived.
| Complete the square:
$$
ax^2+bx+c=a\left(x^2+\frac{b}{a}x\right)+c=a\left(x^2+2\frac{b}{2a}x +\frac{b^2}{4a^2}\right) - \frac{b^2}{4a} + c
$$
$$
= a\left(x+\frac{b}{2a}\right)^2 -\frac{b^2}{4a} + c \, .
$$
Hence,
$$
e^{ax^2+bx+c} = A\,e^{a\left(x+\frac{b}{2a}\right)^2} \, ,
$$
in which $A=e^{- \frac{b^2}{4a} + c}$, and this is a normal density of the form
$$
\frac{1}{\sqrt{2\pi}\sigma}e^{-\frac{1}{2\sigma^2}\left(x-\mu \right)^2}
$$
if and only if $\mu = -\frac{b}{2a}$, $-\frac{1}{2\sigma^2}=a$, and $\frac{1}{\sqrt{2\pi}\sigma} = A$.
You end with just two parameters, $\mu$ and $\sigma$, instead of three, because you need a constraint that guarantees that $e^{ax^2+bx+c}$ integrates to one (it is a density). In other words, you already know that $c$ is determined from $a$ and $b$ by $c=-\log\left(\int_{-\infty}^\infty e^{ax^2+bx}\, dx\right)$.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/39163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
conditional expectation of squared standard normal Let $A,B$ independent standard normals. What is $E(A^2|A+B)$?
Is the following ok?
$A,B$ iid and hence $(A^2,A+B),(B^2,A+B)$ iid.
Therefore we have $\int_M A^2 dP = \int_M B^2 dP$ for every $A+B$-measurable set $M$ and hence $E(A^2|A+B) = E(B^2|A+B)$.
We obtain $2 \cdot E(A^2|A+B) = E(A^2|A+B) + E(B^2|A+B) = E(A^2+B^2|A+B) = A^2+B^2$ where the last equation holds since $A^2+B^2$ is $A+B$-measurable.
Finally we have $E(A^2|A+B) = \frac{A^2+B^2}{2}$.
| There are similar questions on CV, but I haven't seen anyone give the details on the conditional distribution of $A$ given $A+B$, so here goes.
Let $Z = A+B.$ Following the logic given by Xi'an here: Simulation involving conditioning on sum of random variables, the pdf for $A \mid Z$ is
$$ f_{A|Z}(a|z)=\frac{f_B(z-a) f_A(a)}{f_Z(z)}=\frac{\frac{1}{\sqrt{2 \pi}}e^{-\left(z-a \right)^2 \over 2 } \frac{1}{\sqrt{2 \pi}}e^{-a^2 \over 2}}{\frac{1}{\sqrt {4 \pi}} e^{{-z^2 \over 4 }}} $$
This simplifies to $$ f_{A|Z}(a|z) = \frac{1}{\sqrt{\pi}} \ e^{- \left( a-\frac{z}{2} \right)^2} $$
If we let $w=\frac{1}{\sqrt{2}}$ then we can write it as
$$ f_{A|Z}(a|z) = \frac{1}{\sqrt{2 \pi w^2}} \ e^{- \left( a-\frac{z}{2} \right)^2 \over 2w^2} \ $$
Now we can see the conditional distribution is normal with mean $z \over 2$ and variance $\frac{1}{2}.$
So to answer the question, since for any random variable $X$ with finite variance we know that $E[X^2]=\mu^2+\sigma^2,$ we have
$$E[A^2|A+B=z]=\left( \frac{z}{2} \right)^2 + \frac{1}{2}= \frac{z^2+2}{4} = \frac{\left( A + B \right)^2+2}{4} $$
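A simulation sketch in R checks the formula by conditioning on a narrow slice of $Z = A + B$ (here around $z = 1$):
set.seed(1)
a <- rnorm(2e6); b <- rnorm(2e6)
z <- a + b
idx <- abs(z - 1) < 0.01
c(mean(a[idx]^2), (1^2 + 2)/4)   # both ~0.75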
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/79842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove the equivalence of the following two formulas for Spearman correlation From wikipedia, Spearman's rank correlation is calculated by converting variables $X_i$ and $Y_i$ into ranked variables $x_i$ and $y_i$, and then calculating Pearson's correlation between the ranked variables:
However, the article goes on to state that if there are no ties amongst the variables $X_i$ and $Y_i$, the above formula is equivalent to
where $d_i = y_i - x_i$, the difference in ranks.
Can someone give a proof of this please? I don't have access to the textbooks referenced by the wikipedia article.
| $ \rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2 \sum_i(y_i-\bar{y})^2}}$
Since there are no ties, the $x$'s and $y$'s both consist of the integers from $1$ to $n$ inclusive.
Hence $\sum_i (x_i-\bar{x})^2 = \sum_i (y_i-\bar{y})^2$, and we can rewrite $\rho$ as:
$\rho = \frac{\sum_i(x_i-\bar{x})(y_i-\bar{y})}{\sum_i (x_i-\bar{x})^2}$
But the denominator is just a function of $n$:
$\sum_i (x_i-\bar{x})^2 = \sum_i x_i^2 - n\bar{x}^2 \\
\quad= \frac{n(n + 1)(2n + 1)}{6} - n(\frac{(n + 1)}{2})^2\\
\quad= n(n + 1)(\frac{(2n + 1)}{6} - \frac{(n + 1)}{4})\\
\quad= n(n + 1)(\frac{(8n + 4-6n-6)}{24})\\
\quad= n(n + 1)(\frac{(n -1)}{12})\\
\quad= \frac{n(n^2 - 1)}{12}$
Now let's look at the numerator:
$\sum_i(x_i-\bar{x})(y_i-\bar{y})\\
\quad=\sum_i x_i(y_i-\bar{y})-\sum_i\bar{x}(y_i-\bar{y}) \\
\quad=\sum_i x_i y_i-\bar{y}\sum_i x_i-\bar{x}\sum_iy_i+n\bar{x}\bar{y} \\
\quad=\sum_i x_i y_i-n\bar{x}\bar{y} \\
\quad= \sum_i x_i y_i-n(\frac{n+1}{2})^2 \\
\quad= \sum_i x_i y_i- \frac{n(n+1)}{12}3(n +1) \\
\quad= \frac{n(n+1)}{12}.(-3(n +1))+\sum_i x_i y_i \\
\quad= \frac{n(n+1)}{12}.[(n-1) - (4n+2)] + \sum_i x_i y_i \\
\quad= \frac{n(n+1)(n-1)}{12} - n(n+1)(2n+1)/6 + \sum_i x_i y_i \\
\quad= \frac{n(n+1)(n-1)}{12} -\sum_i x_i^2+ \sum_i x_i y_i \\
\quad= \frac{n(n+1)(n-1)}{12} -\sum_i (x_i^2+ y_i^2)/2+ \sum_i x_i y_i \\
\quad= \frac{n(n+1)(n-1)}{12} - \sum_i (x_i^2 - 2x_i y_i + y_i^2) /2\\
\quad= \frac{n(n+1)(n-1)}{12} - \sum_i(x_i - y_i)^2/2\\
\quad= \frac{n(n^2-1)}{12} - \sum d_i^2/2$
Numerator/Denominator
$= \frac{n(n+1)(n-1)/12 - \sum d_i^2/2}{n(n^2 - 1)/12}\\
\quad= {\frac {n(n^2 - 1)/12 -\sum d_i^2/2}{n(n^2 - 1)/12}}\\
\quad= 1- {\frac {6 \sum d_i^2}{n(n^2 - 1)}}\,$.
Hence
$ \rho = 1- {\frac {6 \sum d_i^2}{n(n^2 - 1)}}.$
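A quick R sketch confirms the two formulas agree for untied ranks:
set.seed(1)
n <- 20
x <- sample(n); y <- sample(n)    # ranks, no ties
d <- x - y
c(pearson_on_ranks = cor(x, y),
  shortcut = 1 - 6*sum(d^2)/(n*(n^2 - 1)))   # identical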
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/89121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 2
} |
Expectation of joint probability mass function Let the joint probabilty mass function of discrete random variables X and Y be given by
$f(x,y)=\frac{x^2+y^2}{25}$, for $(x,y) = (1,1), (1,3), (2,3)$
The value of E(Y) is ?
Attempt
$E(Y) = \sum_{x,y} y\cdot\frac{x^2 + y^2}{25}$
$E(Y) = \sum_{x,y}\frac{x^2y + y^3}{25}$
Substituting for $(x,y) = (1,1), (1,3), (2,3)$
$E(Y) = \frac1{25} + \frac{30}{25} + \frac{39}{25}$
$E(Y) = 2.80$
Is this right?
| In relation to the (1,1) point you seem to be claiming that $1\cdot \frac{1^2+1^2}{25}=\frac{1}{25}$.
It's more usually thought to be the case that $1^2+1^2>1$*. This seems to be the cause of your problem here.
* (Some people - reckless people perhaps - even claim that $1^2+1^2$ could be as much as $2$.)
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/239111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$P(|X-Y|>\frac{1}{3}/X\geq1/2)=?$
Let X and Y are two independent uniformly distributed random variable on [0,1].The value of $P(|X-Y|>\frac{1}{3}/X\geq1/2)=?$
My try
$P(|X-Y|>\frac{1}{3}/X\geq1/2)=\frac{P(|X-Y|>\frac{1}{3},X\geq1/2)}{P(X\geq1/2)}$
Here I solved the denominator part that is $P(X\geq1/2)$,But the numerator part $P(|X-Y|>\frac{1}{3},X\geq1/2)$ I am not getting any idea how to start?
| Hint: Sketch the unit square for $(X, Y)$ and shade the region where $X \geq 1/2$ and $|X - Y| > 1/3$. The area of the shaded parts gives the numerator:
$$P(|X-Y|>\frac{1}{3},X\geq1/2) = \left(\frac{1}{2}\times (\frac{2}{3} - \frac{1}{2})\times (1-\frac{1}{2}-\frac{1}{3})\right) + \left(\frac{1}{2}\times((\frac{1}{2}-\frac{1}{3})+(1-\frac{1}{3}))\times(1-\frac{1}{2})\right)=\frac{2}{9}$$
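Dividing by $P(X \geq 1/2) = 1/2$ then gives the conditional probability $\frac{2/9}{1/2} = \frac{4}{9}$. A simulation sketch in R checks both numbers:
set.seed(1)
x <- runif(1e6); y <- runif(1e6)
joint <- mean(abs(x - y) > 1/3 & x >= 1/2)
c(joint, joint/mean(x >= 1/2))   # ~2/9 and ~4/9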
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/321329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding MLE and MSE of $\theta$ where $f_X(x\mid\theta)=\theta x^{−2} I_{x\geq\theta}(x)$
Consider i.i.d random variables $X_1$, $X_2$, . . . , $X_n$ having pdf
$$f_X(x\mid\theta) = \begin{cases} \theta x^{−2} & x\geq\theta \\ 0 &
x\lt\theta \end{cases}$$
where $\theta \gt 0$ and $n\geq 3$
(a) Give the likelihood function, expressing it clearly as a function
of $\theta$
(b) Give the MSE of the MLE of $\theta$
My Attempt:
(a) $$L(\theta\mid\vec{x}) = \begin{cases} \theta^n\left(\prod_{i=1}^n x_i\right)^{−2} & x_{(1)}\geq\theta \\ 0 &
x_{(1)}\lt\theta \end{cases}$$
(b) Clearly the MLE of $\theta$ is $X_{(1)}$. We have
$$\begin{align*}
F_{X_{(1)}}(x)
&=\mathsf P(\text{min}{\{X_1,...,X_n}\}\leq x)\\\\
&=1-\mathsf P(\text{min}{\{X_1,...,X_n}\}\gt x)\\\\
&=1-\left(1-F_X(x)\right)^n\\\\
&=1-\left(\frac{\theta}{x}\right)^n
\end{align*}$$
so
$$f_{X_{(1)}}(x)=\left(1-\left(\frac{\theta}{x}\right)^n\right)'=\frac{n\theta^n}{x^{n+1}}I_{[\theta,\infty)}(x)$$
It follows that
$$\begin{align*}
\mathsf E\left(X_{(1)}^2\right)
&=\int_{\theta}^{\infty}\frac{n\theta^n}{x^{n-1}}dx\\\\
&=n\theta^n\left(\frac{x^{-n+2}}{-n+2}\Biggr{|}_{\theta}^{\infty}\right)\\\\
&=\frac{n\theta^2}{n-2}
\end{align*}$$
and
$$\begin{align*}
\mathsf E\left(X_{(1)}\right)
&=\int_{\theta}^{\infty}\frac{n\theta^n}{x^{n}}dx\\\\
&=n\theta^n\left(\frac{x^{-n+1}}{-n+1}\Biggr{|}_{\theta}^{\infty}\right)\\\\
&=\frac{n\theta}{n-1}
\end{align*}$$
so
$$\begin{align*}
\mathsf{Var}\left(X_{(1)}\right)
&=\frac{n\theta^2}{n-2}-\left(\frac{n\theta}{n-1}\right)^2\\\\
&=\theta^2\left(\frac{n}{n-2}-\frac{n^2}{(n-1)^2}\right)
\end{align*}$$
We also have
$$\begin{align*}
\text{bias}^2\left(\hat{\theta}\right)
&=\left(\mathsf E\left(\hat{\theta}\right)-\theta\right)^2\\\\
&=\left(\frac{n\theta}{n-1}-\theta\right)^2\\\\
&=\left(\theta\left(\frac{n}{n-1}-1\right)\right)^2
\end{align*}$$
Finally the MSE is given by
$$\begin{align*}
\mathsf{Var}\left(X_{(1)}\right)+\text{bias}^2\left(\hat{\theta}\right)
&=\theta^2\left(\frac{n}{n-2}-\frac{n^2}{(n-1)^2}\right)+\left(\theta\left(\frac{n}{n-1}-1\right)\right)^2\\\\
&=\theta^2\left(\frac{n}{n-2}-\frac{n^2}{(n-1)^2}+\left(\frac{n}{n-1}-1\right)^2\right)\\\\
&=\frac{2\theta^2}{(n-1)(n-2)}
\end{align*}$$
Are these valid solutions?
| This question is now old enough to give a full succinct solution confirming your calculations. Using standard notation for order statistics, the likelihood function here is:
$$\begin{aligned}
L_\mathbf{x}(\theta)
&= \prod_{i=1}^n f_X(x_i|\theta) \\[6pt]
&= \prod_{i=1}^n \frac{\theta}{x_i^2} \cdot \mathbb{I}(x_i \geqslant \theta) \\[6pt]
&\propto \prod_{i=1}^n \theta \cdot \mathbb{I}(x_i \geqslant \theta) \\[12pt]
&= \theta^n \cdot \mathbb{I}(0 < \theta \leqslant x_{(1)}). \\[6pt]
\end{aligned}$$
This function is strictly increasing over the range $0 < \theta \leqslant x_{(1)}$ so the MLE is:
$$\hat{\theta} = x_{(1)}.$$
Mean-squared-error of MLE: Rather than deriving the distribution of the estimator, it is quicker in this case to derive the distribution of the estimation error. Define the estimation error as $T \equiv \hat{\theta} - \theta$ and note that it has distribution function:
$$\begin{aligned}
F_T(t) \equiv \mathbb{P}(\hat{\theta} - \theta \leqslant t)
&= 1-\mathbb{P}(\hat{\theta} > \theta + t) \\[6pt]
&= 1-\prod_{i=1}^n \mathbb{P}(X_i > \theta + t) \\[6pt]
&= 1-(1-F_X(\theta + t))^n \\[6pt]
&= \begin{cases}
0 & & \text{for } t < 0, \\[6pt]
1 - \Big( \frac{\theta}{\theta + t} \Big)^n & & \text{for } t \geqslant 0. \\[6pt]
\end{cases}
\end{aligned}$$
Thus, the density has support over $t \geqslant 0$, where we have:
$$\begin{aligned}
f_T(t)
\equiv \frac{d F_T}{dt}(t)
&= - n \Big( - \frac{\theta}{(\theta + t)^2} \Big) \Big( \frac{\theta}{\theta + t} \Big)^{n-1} \\[6pt]
&= \frac{n \theta^n}{(\theta + t)^{n+1}}. \\[6pt]
\end{aligned}$$
Assuming that $n>2$, the mean-squared error of the estimator is therefore given by:
$$\begin{aligned}
\text{MSE}(\hat{\theta})
= \mathbb{E}(T^2)
&= \int \limits_0^\infty t^2 \frac{n \theta^n}{(\theta + t)^{n+1}} \ dt \\[6pt]
&= n \theta^n \int \limits_0^\infty \frac{t^2}{(\theta + t)^{n+1}} \ dt \\[6pt]
&= n \theta^n \int \limits_\theta^\infty \frac{(r-\theta)^2}{r^{n+1}} \ dr \\[6pt]
&= n \theta^n \int \limits_\theta^\infty \Big[ r^{-(n-1)} - 2 \theta r^{-n} + \theta^2 r^{-(n+1)} \Big] \ dr \\[6pt]
&= n \theta^n \Bigg[ -\frac{r^{-(n-2)}}{n-2} + \frac{2 \theta r^{-(n-1)}}{n-1} - \frac{\theta^2 r^{-n}}{n} \Bigg]_{r = \theta}^{r \rightarrow \infty} \\[6pt]
&= n \theta^n \Bigg[ \frac{\theta^{-(n-2)}}{n-2} - \frac{2 \theta^{-(n-2)}}{n-1} + \frac{\theta^{-(n-2)}}{n} \Bigg] \\[6pt]
&= n \theta^2 \Bigg[ \frac{1}{n-2} - \frac{2}{n-1} + \frac{1}{n} \Bigg] \\[6pt]
&= \theta^2 \cdot \frac{n(n-1) - 2n(n-2) + (n-1)(n-2)}{(n-1)(n-2)} \\[6pt]
&= \theta^2 \cdot \frac{n^2 - n - 2n^2 + 4n + n^2 - 3n + 2}{(n-1)(n-2)} \\[6pt]
&= \frac{2\theta^2}{(n-1)(n-2)}. \\[6pt]
\end{aligned}$$
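A simulation sketch in R (with arbitrary choices $\theta = 2$, $n = 10$) checks the result; it draws from the model by inverting the CDF $F_X(x) = 1 - \theta/x$:
set.seed(1)
theta <- 2; n <- 10; R <- 1e5
mle <- replicate(R, min(theta/runif(n)))      # X = theta/U has CDF 1 - theta/x for x >= theta
c(simulated = mean((mle - theta)^2),
  formula   = 2*theta^2/((n - 1)*(n - 2)))    # both ~0.111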
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/376060",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
Integrate $\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{(-\frac{1}{2}(\frac{x^2}{4}+4y^2))} dy$ I'm trying to integrate $\int_{-\infty}^{\infty}\frac{1}{2\pi}e^{(-\frac{1}{2}(\frac{x^2}{4}+4y^2))} dy$ using the fact that the integral of any normal PDF is 1. But I'm having trouble completing the square for $(\frac{x^2}{4} + 4y^2)$. Can anyone help me please? Thanks!
| You don't need to complete a square. You only integrate over $y$, not $x$, so you treat $x$ as a constant.
$$ \int_{-\infty}^{\infty}\frac{1}{2\pi}e^{(-\frac{1}{2}(\frac{x^2}{4}+4y^2))} dy
= \frac{1}{2\pi}e^{-\frac{x^2}{8}}\int_{-\infty}^{\infty}e^{-2y^2} dy. $$
Multiply and divide by the same factor that turns the integral into an integral over
the PDF of a normal distribution with mean $0$ and variance $\frac{1}{4}$, since this integral is one:
$$ = \frac{1}{2\pi}e^{-\frac{x^2}{8}}\sqrt{2\pi\times\frac{1}{4}}\underbrace{\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\times\frac{1}{4}}}e^{-\frac{y^2}{2\times\frac{1}{4}}} dy}_{=1}
= \frac{1}{2\pi}e^{-\frac{x^2}{8}}\sqrt{2\pi\times\frac{1}{4}}
= \frac{1}{2\sqrt{2\pi}}e^{-\frac{x^2}{8}}.$$
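A numerical check in R (at an arbitrary point $x = 1.7$):
f <- function(y, x = 1.7) exp(-0.5*(x^2/4 + 4*y^2))/(2*pi)
c(integrate(f, -Inf, Inf)$value, exp(-1.7^2/8)/(2*sqrt(2*pi)))   # equal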
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/406644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Compute the quantile function of this distribution I want to compute the quantile function of a Bin(4, 1/3) random variable and plot it with R, but I don't know where to start from.
Any help or hint is highly appreciated.
| Note that the probability mass function is
$$
f(x) = P(X = x) =
\begin{cases}
\frac{16}{81}&\text{if } x=0\\
\frac{32}{81}&\text{if } x=1\\
\frac{24}{81}&\text{if } x=2\\
\frac{8}{81}&\text{if } x=3\\
\frac{1}{81}&\text{if } x=4,
\end{cases}
$$
and the cumulative distribution function is
$$
F(x) =
\begin{cases}
0 & \text{if } x<0\\
\frac{16}{81} &\text{if } 0\leq x < 1\\
\frac{48}{81} &\text{if } 1\leq x < 2\\
\frac{72}{81} &\text{if } 2\leq x < 3\\
\frac{80}{81} &\text{if } 3\leq x < 4\\
1 &\text{if } 4\leq x.
\end{cases}
$$
The quantile function $Q$ is thus
$$
Q(p) =
\begin{cases}
0 & \text{if } 0< p \leq \frac{16}{81}\\
\cdots & \cdots\\
4 & \text{if } \frac{80}{81}< p.
\end{cases}
$$
I leave the rest, e.g. the $\cdots$, to you.
R code
plot(1, xlim=c(0,1), ylim=c(-1,5), type="n",
xlab= "p", ylab= "Q(p)", main = "X ~ Bin(4,1/3)")
abline(v=c(0,1), lwd=1, lty=2, col="lightgray")
segments(x0=0,x1=16/81,y0=0,y1=0, lwd=2)
points(x=16/81,y=0, pch=20, cex=1.4)
#+++++++
segments(x0=16/81,x1=48/81,y0=1,y1=1, lwd=2)
points(x=48/81,y=1, pch=20, cex=1.4)
#+++++++
segments(x0=48/81,x1=72/81,y0=2,y1=2, lwd=2)
points(x=72/81,y=2, pch=20, cex=1.4)
#+++++++
segments(x0=72/81,x1=80/81,y0=3,y1=3, lwd=2)
points(x=80/81,y=3, pch=20, cex=1.4)
#+++++++
segments(x0=80/81,x1=81/81,y0=4,y1=4, lwd=2)
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/593980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Simplify sum of combinations with same n, all possible values of k Is there a way to simplify this equation?
$$\dbinom{8}{1} + \dbinom{8}{2} + \dbinom{8}{3} + \dbinom{8}{4} + \dbinom{8}{5} + \dbinom{8}{6} + \dbinom{8}{7} + \dbinom{8}{8}$$
Or more generally,
$$\sum_{k=1}^{n}\dbinom{n}{k}$$
| See
http://en.wikipedia.org/wiki/Combination#Number_of_k-combinations_for_all_k
which says
$$ \sum_{k=0}^{n} \binom{n}{k} = 2^n$$
You can prove this using the binomial theorem where $x=y=1$.
Now, since $\binom{n}{0} = 1$ for any $n$, it follows that
$$ \sum_{k=1}^{n} \binom{n}{k} = 2^n - 1$$
In your case $n=8$, so the answer is $2^8 - 1 = 255$.
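A one-line check in R:
c(sum(choose(8, 1:8)), 2^8 - 1)   # both 255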
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/27266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 2,
"answer_id": 0
} |
PDF of Sum of Two Random Variables $X$ and $Y$ are uniformly distributed on the unit disk. Thus,
$f_{X,Y}(x,y) = \begin{cases} \frac{1}{\pi}, & \text{if} ~ x^2+y^2 \leq 1,\\
0, &\text{otherwise.}\end{cases}$
If $Z=X+Y$, find the pdf of $Z$.
| Following Dilip's hint, the geometry of the circular segment gives
$$
F_Z(z) = \begin{cases} 0 & \text{if} \quad -\infty< z\leq -\sqrt{2} \\
\frac{1}{\pi} \left(\cos^{-1}\left(-\frac{z\sqrt{2}}{2}\right) + \frac{z\sqrt{2}}{2} \sqrt{1-\frac{z^2}{2}}\right) & \text{if} \quad-\sqrt{2}<z\leq 0 \\
1 - \frac{1}{\pi} \left(\cos^{-1}\left(\frac{z\sqrt{2}}{2}\right) - \frac{z\sqrt{2}}{2} \sqrt{1-\frac{z^2}{2}}\right) & \text{if} \quad 0<z<\sqrt{2} \\
1 & \text{if} \quad \sqrt{2}\leq z<\infty\quad
\end{cases}
$$
which can be checked by this Monte Carlo simulation coded in R.
N <- 10^5
u <- runif(N, min = -1, max = 1)
v <- runif(N, min = -1, max = 1)
inside <- (u^2 + v^2 <= 1)
x <- u[inside]
y <- v[inside]
accepted <- sum(inside)
plot(x, y, xlim = c(-1, 1), ylim = c(-1, 1), pch = ".")
z = -0.9
sum(x + y <= z) / accepted
(acos(-z*sqrt(2)/2) + (z*sqrt(2)/2) * sqrt(1 - z^2/2)) / pi
z = 1.2
sum(x + y <= z) / accepted
1 - (acos(z*sqrt(2)/2) - (z*sqrt(2)/2) * sqrt(1 - z^2/2)) / pi
P.S. As Dilip's comment below shows, it is much easier to work directly with the pdfs, because
$$
f_Z(z)=\frac{\sqrt{2}}{2}f_X\left(\frac{z\sqrt{2}}{2}\right) \, ,
$$
and
$$
f_X(x)=\frac{2}{\pi}\sqrt{1-x^2}\;I_{[-1,1]}(x) \, .
$$
To adjust the indicator, notice that $-1\leq z\sqrt{2}/2\leq 1$ if and only if $-\sqrt{2}\leq z\leq \sqrt{2}$. Hence, $I_{[-1,1]}(z\sqrt{2}/2)=I_{[-\sqrt{2},\sqrt{2}]}(z)$.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/94465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Convexity of Function of PDF and CDF of Standard Normal Random Variable Please provide proof that $Q\left(x\right)=x^{2}+x\frac{\phi\left(x\right)}{\Phi\left(x\right)}$ is convex $\forall x>0 $. Here, $\phi$ and $\mathbf{\Phi}$ are the standard normal PDF and CDF, respectively.
STEPS TRIED
1) CALCULUS METHOD
I have tried the calculus method and have a formula for the second derivative, but am not able to show that it is positive $\forall x > 0$. Please let me know if you need any further details.
Finally,
\begin{eqnarray*}
\text{Let }Q\left(x\right)=x^{2}+x\frac{\phi\left(x\right)}{\Phi\left(x\right)}
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial Q\left(x\right)}{\partial x} & = & 2x+x\left[-\frac{x\phi\left(x\right)}{\Phi\left(x\right)}-\left\{ \frac{\phi\left(x\right)}{\Phi\left(x\right)}\right\} ^{2}\right]+\frac{\phi\left(x\right)}{\Phi\left(x\right)}
\end{eqnarray*}
\begin{eqnarray*}
\left.\frac{\partial Q\left(x\right)}{\partial x}\right|_{x=0} & = & \frac{\phi\left(0\right)}{\Phi\left(0\right)}>0
\end{eqnarray*}
\begin{eqnarray*}
\frac{\partial^{2}Q\left(x\right)}{\partial x^{2}} & = & 2+x\phi\left(x\right)\left[\frac{-\Phi^{2}\left(x\right)+x^{2}\Phi^{2}\left(x\right)+3x\phi\left(x\right)\Phi\left(x\right)+2\phi^{2}\left(x\right)}{\Phi^{3}\left(x\right)}\right]+2\left[-x\frac{\phi\left(x\right)}{\Phi\left(x\right)}-\left\{ \frac{\phi\left(x\right)}{\Phi\left(x\right)}\right\} ^{2}\right]
\end{eqnarray*}
\begin{eqnarray*}
& = & 2+\phi\left(x\right)\left[\frac{x^{3}\Phi^{2}\left(x\right)+3x^{2}\phi\left(x\right)\Phi\left(x\right)+2x\phi^{2}\left(x\right)-3x\Phi^{2}\left(x\right)-2\phi\left(x\right)\Phi\left(x\right)}{\Phi^{3}\left(x\right)}\right]\\
\end{eqnarray*}
\begin{eqnarray*}
& = & \left[\frac{2\Phi^{3}\left(x\right)+x^{3}\Phi^{2}\left(x\right)\phi\left(x\right)+3x^{2}\phi^{2}\left(x\right)\Phi\left(x\right)+2x\phi^{3}\left(x\right)-3x\Phi^{2}\left(x\right)\phi\left(x\right)-2\phi^{2}\left(x\right)\Phi\left(x\right)}{\Phi^{3}\left(x\right)}\right]
\end{eqnarray*}
\begin{eqnarray*}
\text{Let, }K\left(x\right)=2\Phi^{3}\left(x\right)+2x\phi^{3}\left(x\right)+\Phi^{2}\left(x\right)\phi\left(x\right)x\left[x^{2}-3\right]+\phi^{2}\left(x\right)\Phi\left(x\right)\left[3x^{2}-2\right]
\end{eqnarray*}
\begin{eqnarray*}
K\left(0\right)=\frac{1}{4}-\frac{1}{2\pi}>0
\end{eqnarray*}
For $x\geq\sqrt{3},K\left(x\right)>0$. For $x\in\left(0,\sqrt{3}\right)$,
\begin{eqnarray*}
K'\left(x\right) & = & 6\Phi^{2}\left(x\right)\phi\left(x\right)+2\phi^{3}\left(x\right)-6x^{2}\phi^{3}\left(x\right)+2\Phi\left(x\right)\phi^{2}\left(x\right)\left[x^{3}-3x\right]-\Phi^{2}\left(x\right)\phi\left(x\right)\left[x^{4}-3x^{2}\right]+\Phi^{2}\left(x\right)\phi\left(x\right)\left[3x^{2}-3\right]\\
& & -2\phi^{2}\left(x\right)\Phi\left(x\right)\left[3x^{3}-2x\right]+\phi^{3}\left(x\right)\left[3x^{2}-2\right]+\phi^{2}\left(x\right)\Phi\left(x\right)6x
\end{eqnarray*}
\begin{eqnarray*}
K'\left(x\right) & = & 6\Phi^{2}\left(x\right)\phi\left(x\right)-3\Phi^{2}\left(x\right)\phi\left(x\right)+2\phi^{3}\left(x\right)-2\phi^{3}\left(x\right)+6x\Phi\left(x\right)\phi^{2}\left(x\right)-6x\Phi\left(x\right)\phi^{2}\left(x\right)+3x^{2}\Phi^{2}\left(x\right)\phi\left(x\right)+3x^{2}\Phi^{2}\left(x\right)\phi\left(x\right)\\
& & +2x^{3}\Phi\left(x\right)\phi^{2}\left(x\right)-6x^{3}\Phi\left(x\right)\phi^{2}\left(x\right)+3x^{2}\phi^{3}\left(x\right)-6x^{2}\phi^{3}\left(x\right)+4x\Phi\left(x\right)\phi^{2}\left(x\right)-x^{4}\Phi^{2}\left(x\right)\phi\left(x\right)
\end{eqnarray*}
\begin{eqnarray*}
& = & 3\Phi^{2}\left(x\right)\phi\left(x\right)+6x^{2}\Phi^{2}\left(x\right)\phi\left(x\right)+4x\Phi\left(x\right)\phi^{2}\left(x\right)-3x^{2}\phi^{3}\left(x\right)-x^{4}\Phi^{2}\left(x\right)\phi\left(x\right)-4x^{3}\Phi\left(x\right)\phi^{2}\left(x\right)
\end{eqnarray*}
\begin{eqnarray*}
=\phi\left(x\right)\left[3\Phi^{2}\left(x\right)+x\left\{ 6x\Phi^{2}\left(x\right)-3x\phi^{2}\left(x\right)-x^{3}\Phi^{2}\left(x\right)+4\Phi\left(x\right)\phi\left(x\right)\left[1-x^{2}\right]\right\} \right]
\end{eqnarray*}
2) GRAPHICAL / NUMERICAL METHOD
I was also able to see this numerically and visually by plotting the graphs as shown below; but it would be helpful to have a proper proof.
| Let's show the second derivative of $Q$ is positive for $x \ge 0$. First, we need to know how to differentiate $\Phi$ and $\phi$.
By definition,
$$\frac{d}{dx}\Phi(x) = \phi(x) = \frac{1}{\sqrt{2\pi}} \exp(-x^2/2).$$
Differentiating once more gives
$$\frac{d}{dx}\phi(x) = -x \phi(x).$$
Applying this result to another derivative yields
$$\frac{d^2}{dx^2}\phi(x) = (-1 + x^2)\phi(x).$$
Using these results, along with the usual product and quotient rules of differentiation, we find the numerator of the second derivative is the sum of six terms. (This result was obtained around the middle of the question.) It is convenient to arrange the terms into three groups:
$$\eqalign{
\Phi(x)^3\frac{d^2}{dx^2}Q(x)= &2 x \phi(x)^3 \\
&+\,3 x^2 \phi(x)^2 \Phi(x)+x^3 \phi(x) \Phi(x)^2 \\
&+\,\Phi(x) \left(-2 \phi(x)^2-3 x \phi(x) \Phi(x)+2 \Phi(x)^2\right).
}$$
Because $\phi$ is a probability density, it is nonnegative and so is the distribution function $\Phi$. Thus only the third term could possibly be negative when $x\ge 0$. Its sign is the same as that of its second factor,
$$R(x) = -2 \phi(x)^2-3 x \phi(x) \Phi(x)+2 \Phi(x)^2.$$
There are many ways to show this factor cannot be negative. One is to note that
$$R(0) = -2\phi(0)^2 + 2\Phi(0)^2 = \frac{1}{2} - \frac{1}{\pi} \gt 0.$$
Differentiation--using the same simple techniques as before--gives
$$\frac{d}{dx} R(x) = \phi(x)(x \phi(x) + (1 + 3x^2)\Phi(x))$$
which is plainly positive for $x\ge 0$. Therefore $R(x)$ is an increasing function on the interval $[0, \infty)$. Its minimum must be at $R(0) \gt 0$, proving $R(x)\gt 0$ for all $x \ge 0$.
We have shown $Q$ has positive second derivative for $x \ge 0$, QED.
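For readers who want a numerical cross-check of the algebra, a short R sketch evaluates a finite-difference second derivative of $Q$ on a grid of positive $x$:
Q <- function(x) x^2 + x*dnorm(x)/pnorm(x)
x <- seq(0.01, 10, by = 0.01); h <- 1e-4
d2 <- (Q(x + h) - 2*Q(x) + Q(x - h))/h^2
min(d2)    # strictly positive over the grid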
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/158042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 1,
"answer_id": 0
} |
If $p(A,B,C,D) = p(A,B) \cdot p(C,D)$, then is $p(A \mid B,C,D) = p(A \mid B)$? Given the discrete random variables $A,B,C,$ and $D$, if
$$
p(A = a,B = b,C = c,D = d) = p(A = a,B = b) \cdot p(C = c,D = d) \ \forall a,b,c,d
$$
then is
$$
p(A = a \mid B = b,C = c,D = d) = p(A = a \mid B = b)
$$
and
$$
p(B = b \mid A = a,C = c,D = d) = p(B = b \mid A = a)
$$
for all $a,b,c,$ and $d$?
| \begin{align}
p(A = a \mid B = b,C = c,D = d) &= \frac{p(A = a,B = b,C = c,D = d)}{\sum_{a} p(A = a,B = b,C = c,D = d)} \\
&= \frac{p(A = a,B = b) \cdot p(C = c,D = d)}{\sum_{a} p(A = a,B = b) \cdot p(C = c,D = d)} \\
&= \frac{p(A = a,B = b) \cdot p(C = c,D = d)}{p(C = c,D = d) \cdot p(B = b)} \\
&= \frac{p(A = a,B = b)}{p(B = b)} \\
&= p(A = a \mid B = b)
\end{align}
The proof for
$$
p(B = b \mid A = a,C = c,D = d) = p(B = b \mid A = a)
$$
follows in a similar way.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/557567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Poisson process - calls arriving Already posted on MSE. Had no answer, so will post here.
Assume the number of calls per hour arriving at an answering service follows a Poisson process with $\lambda = 4$.
Question: If it is know that $8$ calls came in the first two hours. What is the probability that exactly five arrived in the first hour?
Attempt: Isn't this just a combinatorial question? So the answer is ${8 \choose 5}/2^8$
| Thinking this through, I believe this should be calculated with a binomial distribution with $n = 8$ and $p = 0.5$ as follows:
$P = \binom{8}{5} \cdot 0.5^{5} \cdot (1-0.5)^{3} $
Let me try to prove this:
Let
$X_1$ = number of calls that arrive in the first hour
$X_2$ = number of calls that arrive in the second hour
$X_3$ = number of calls that arrive in the two hours
What you want to calculate is the conditional probability of 5 calls arriving in the first hour given that 8 calls arrived in two hours:
$P(X_1 = 5 | X_3 = 8) = \frac {P[(X_1 = 5) \cap (X_3 = 8)]} {P(X_3 = 8)}$
This is equivalent to $\frac {P[(X_1 = 5) \cap (X_2 = 3)]} {P(X_3 = 8)}$; now the events occur over non-overlapping time frames, which allows us to use the independent-increment property of the Poisson process.
$\frac {P[(X_1 = 5) \cap (X_2 = 3)]} {P(X_3 = 8)} = \frac {P(X_1 = 5) \cdot P(X_2 = 3)]} {P(X_3 = 8)}$
$ =\frac {\left[\frac {e^{-4} \cdot 4^5} {5!} \right] \cdot \left[\frac {e^{-4} \cdot 4^3} {3!} \right]} {\frac {e^{-(4 \cdot 2)} \cdot {(4 \cdot 2)}^8} {8!}} $
$=\frac{8!} {5! \cdot 3!} \frac {(4^5) \cdot (4^3)} {8^8} $
$=\frac{8!} {5! \cdot 3!} \frac {(4^5) \cdot (4^3)} {(8^5) \cdot (8^3)} $
$=\frac{8!} {5! \cdot 3!} \cdot \left(\frac {4} {8}\right)^5 \cdot \left(\frac {4} {8}\right)^3$
$= \binom{8}{5} \cdot 0.5^{5} \cdot (0.5)^{3} $
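A simulation sketch in R agrees with the binomial answer:
set.seed(1)
n1 <- rpois(1e6, 4); n2 <- rpois(1e6, 4)
c(mean(n1[n1 + n2 == 8] == 5), dbinom(5, 8, 0.5))   # both ~0.219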
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/74838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Regular Transition Matrix
A transition matrix is regular if some power of the matrix contains
all positive entries. [1]
Its powers have all positive entries...
Why isn't this matrix a Regular Transition Matrix?
Reference:
[1] http://fazekas-andras-istvan.hu/9_11_markov_lancok/DFAI_MARKOV_CHAINS_02.pdf
| It can be shown that (either by theoretical derivation or using Matlab) $B$ has the Jordan canonical form decomposition as follows:
$$B = PJP^{-1},$$
where
\begin{align*}
P = \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix},
\end{align*}
\begin{align*}
J = \begin{bmatrix}
0.5 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}.
\end{align*}
Hence for each natural number $n$, we have:
\begin{align*}
B^n = & (PJP^{-1})^n = PJ^nP^{-1} \\
= & \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
0.5^n & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}
\begin{bmatrix}
1 & 0 & -1 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix} \\
= &
\begin{bmatrix}
0.5^n & 0 & 1 - 0.5^n \\
0 & 1 & 0 \\
0 & 0 & 1
\end{bmatrix}.
\end{align*}
It thus can be seen that not every entry of $B^n$ is positive, hence $B$ is not a regular transition matrix.
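Writing out $B = PJP^{-1}$ explicitly gives $B = \begin{pmatrix}0.5 & 0 & 0.5\\ 0 & 1 & 0\\ 0 & 0 & 1\end{pmatrix}$, and a quick R sketch confirms that its powers keep several entries at exactly zero:
B <- matrix(c(0.5, 0, 0.5,
              0,   1, 0,
              0,   0, 1), nrow = 3, byrow = TRUE)
Bn <- diag(3)
for (i in 1:50) Bn <- Bn %*% B
Bn          # off-diagonal zeros persist, so no power of B is strictly positive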
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/253931",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to put an ARMA(2,2) model in state-space form I am interested in an ARMA$(2,2)$ model with an additional input variable, which I want to put in state-space form. If $w_t$ is white noise, and $x_t$ is a known input, the model is given by:
$$y_t = \beta_0 + \beta_1 \cdot x_{t-1} + \alpha_1 \cdot y_{t-1} + \alpha_2 \cdot y_{t-2} + w_t + \theta_1 \cdot w_{t-1} + \theta_2 \cdot w_{t-2}.$$
Can someone please show how to write this in state-space form?
| One way to do it is to define the state vector as
$$
\xi_t = \begin{pmatrix}
y_t \\
y_{t-1} \\
w_{t} \\
w_{t-1} \\
1 \\
\end{pmatrix}
$$
The measurement equation is just
$$
y_t = \begin{pmatrix}
1 & 0 & 0 & 0 & 0
\end{pmatrix} \, \xi_t
$$
i.e. there is no noise term. The state transition equation is then
$$
\underbrace{\begin{pmatrix}
y_t \\
y_{t-1} \\
w_{t} \\
w_{t-1} \\
1 \\
\end{pmatrix}}_{\xi_t}
=
\begin{pmatrix}
\alpha_1 & \alpha_2 & \theta_1 & \theta_2 & \beta_0+\beta_1 x_{t-1} \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 1 \\
\end{pmatrix}
\underbrace{\begin{pmatrix}
y_{t-1} \\
y_{t-2} \\
w_{t-1} \\
w_{t-2} \\
1 \\
\end{pmatrix}}_{\xi_{t-1}}
+
\begin{pmatrix}
1 \\
0 \\
1 \\
0 \\
0 \\
\end{pmatrix} w_t
$$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/338910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to calculate the Transposed Convolution? Studying for my finals in Deep learning. I'm trying to solve the following question:
Calculate the Transposed Convolution of input $A$ with kernel $K$:
$$
A=\begin{pmatrix}1 & 0 & 1\\
0 & 1 & 0\\
1 & 0 & 1
\end{pmatrix},\quad K=\begin{pmatrix}1 & 0\\
1 & 1
\end{pmatrix}
$$
I can't seem to find the formula which is used to calculate the Transposed Convolution (found only the formula to calculate the dimension). I know that the Convolution formula is:
$$
G(i,j)=\sum_{u=-k}^{k}\sum_{v=-k}^{k}K(u,v)A(i-u,j-v)
$$
But how to calculate the Transposed Convolution?
In a video I saw the following example:
Which is easy for $2\times2$ kernel and image to see that:
$$
\begin{align*}
&K_{0,0}\star^{T}A=2\star^{T}\begin{pmatrix}3 & 1\\
1 & 5
\end{pmatrix}=\begin{pmatrix}6 & 2 & 0\\
2 & 10 & 0\\
0 & 0 & 0
\end{pmatrix}&&K_{1,0}\star^{T}A=0\star^{T}\begin{pmatrix}3 & 1\\
1 & 5
\end{pmatrix}=\begin{pmatrix}0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}\\&K_{0,1}\star^{T}A=4\star^{T}\begin{pmatrix}3 & 1\\
1 & 5
\end{pmatrix}=\begin{pmatrix}0 & 12 & 4\\
0 & 4 & 20\\
0 & 0 & 0
\end{pmatrix}&&K_{1,1}\star^{T}A=1\star^{T}\begin{pmatrix}3 & 1\\
1 & 5
\end{pmatrix}=\begin{pmatrix}0 & 0 & 0\\
0 & 3 & 1\\
0 & 1 & 5
\end{pmatrix}
\end{align*}
$$
Then you have:
$$
A'=\begin{pmatrix}6 & 2 & 0\\
2 & 10 & 0\\
0 & 0 & 0
\end{pmatrix}+\begin{pmatrix}0 & 12 & 4\\
0 & 4 & 20\\
0 & 0 & 0
\end{pmatrix}+\begin{pmatrix}0 & 0 & 0\\
0 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}+\begin{pmatrix}0 & 0 & 0\\
0 & 3 & 1\\
0 & 1 & 5
\end{pmatrix}=\begin{pmatrix}6 & 14 & 4\\
2 & 17 & 21\\
0 & 1 & 5
\end{pmatrix}
$$
But I can't seem to figure how to make it for $3\times 3$ image and $2\times 2$ kernel.
I do know that the dim of the output should be $4\times 4$ because:
$$
\begin{cases}
H=(3-1)\cdot1+2-2\cdot0=4\\
W=(3-1)\cdot1+2-2\cdot0=4
\end{cases}
$$
| For a 3x3 input, just as for 2x2, each input pixel is multiplied by the kernel and the results are accumulated into the output. A naive implementation is given by this page:
import numpy as np

def trans_conv(X, K):
    h, w = K.shape
    # output is (H + h - 1) x (W + w - 1) for an H x W input
    Y = np.zeros((X.shape[0] + h - 1, X.shape[1] + w - 1))
    for i in range(X.shape[0]):
        for j in range(X.shape[1]):
            Y[i: i + h, j: j + w] += X[i, j] * K
    return Y
So, every $h\times w$ block of the output starting at position $(i,j)$ is incremented by $X[i,j]\cdot K$. Applying this to the matrices in the question ($3\times 3$ input, $2\times 2$ kernel, stride 1, no padding) gives the $4\times 4$ output
$$
A' = \begin{pmatrix}1 & 0 & 1 & 0\\ 1 & 2 & 1 & 1\\ 1 & 1 & 2 & 0\\ 1 & 1 & 1 & 1\end{pmatrix}.
$$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/587634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to easily determine the results distribution for multiple dice? I want to calculate the probability distribution for the total of a combination of dice.
I remember that the probability of is the number of combinations that total that number over the total number of combinations (assuming the dice have a uniform distribution).
What are the formulas for
*
*The number of combinations total
*The number of combinations that total a certain number
| Exact solutions
The number of combinations in $n$ throws is of course $6^n$.
These calculations are most readily done using the probability generating function for one die,
$$p(x) = x + x^2 + x^3 + x^4 + x^5 + x^6 = x \frac{1-x^6}{1-x}.$$
(Actually this is $6$ times the pgf--I'll take care of the factor of $6$ at the end.)
The pgf for $n$ rolls is $p(x)^n$. We can calculate this fairly directly--it's not a closed form but it's a useful one--using the Binomial Theorem:
$$p(x)^n = x^n (1 - x^6)^n (1 - x)^{-n}$$
$$= x^n \left( \sum_{k=0}^{n} {n \choose k} (-1)^k x^{6k} \right) \left( \sum_{j=0}^{\infty} {-n \choose j} (-1)^j x^j\right).$$
The number of ways to obtain a sum equal to $m$ on the dice is the coefficient of $x^m$ in this product, which we can isolate as
$$\sum_{6k + j = m - n} {n \choose k}{-n \choose j}(-1)^{k+j}.$$
The sum is over all nonnegative $k$ and $j$ for which $6k + j = m - n$; it therefore is finite and has only about $(m-n)/6$ terms. For example, the number of ways to total $m = 14$ in $n = 3$ throws is a sum of just two terms, because $11 = 14-3$ can be written only as $6 \cdot 0 + 11$ and $6 \cdot 1 + 5$:
$$-{3 \choose 0} {-3 \choose 11} + {3 \choose 1}{-3 \choose 5}$$
$$= -1 \cdot \frac{(-3)(-4)\cdots(-13)}{11!} + 3\, \frac{(-3)(-4)\cdots(-7)}{5!}$$
$$= \frac{1}{2} 12 \cdot 13 - \frac{3}{2} 6 \cdot 7 = 15.$$
(You can also be clever and note that the answer will be the same for $m = 7$ by the symmetry 1 <--> 6, 2 <--> 5, and 3 <--> 4 and there's only one way to expand $7 - 3$ as $6 k + j$; namely, with $k = 0$ and $j = 4$, giving
$$ {3 \choose 0}{-3 \choose 4} = 15 \text{.}$$
The probability therefore equals $15/6^3 = 5/72$, about 7%.
By the time this gets painful, the Central Limit Theorem provides good approximations (at least to the central terms where $m$ is between $\frac{7 n}{2} - 3 \sqrt{n}$ and $\frac{7 n}{2} + 3 \sqrt{n}$: on a relative basis, the approximations it affords for the tail values get worse and worse as $n$ grows large).
I see that this formula is given in the Wikipedia article Srikant references but no justification is supplied nor are examples given. If perchance this approach looks too abstract, fire up your favorite computer algebra system and ask it to expand the $n^{\text{th}}$ power of $x + x^2 + \cdots + x^6$: you can read the whole set of values right off. E.g., a Mathematica one-liner is
With[{n=3}, CoefficientList[Expand[(x + x^2 + x^3 + x^4 + x^5 + x^6)^n], x]]
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/3614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 10,
"answer_id": 8
} |
Combined distribution of beta and uniform variables Given
$$X \sim \text{Beta}(\alpha,\beta)$$ (where $\alpha=\beta$, if that helps) and
$$\theta \sim \text{Uniform}(0, \pi/2).$$
I'm trying to find a formula for $P(Y)$ (or even the cdf) of
$$Y = X + (1-2X)\cos(\theta)$$
on the domain $(0,1)$.
I know from here that given $C = \cos(\theta),$ its PDF is
$$P(C) = \frac{2}{\pi\sqrt{1-C^2}}$$
and of course the pdf of a beta distribution is
$$P(X) = \frac{X^{\alpha-1}(1-X)^{\beta-1}}{\mathrm{Beta}(\alpha, \beta)}.$$
But combining them is getting beyond my skills as an engineer.
EDIT: As it seems a closed form is not possible for this, could someone please show me how to formulate the integral to calculate this sort of PDF? There may be some way I can reformulate the problem to be more solution-friendly if I could wrap my head around how compound distributions of this type are built mathematically.
| There is no closed form for the density. Its integral form can be obtained as follows. If we condition on $X=x$ we have $Y = x + (1-2x) C$ where $C$ ranges over the unit interval. The support under this condition is:
$$\text{supp}(Y|X=x) = \begin{cases}
[x,1-x] & & & \text{for } 0 \leqslant x < \tfrac{1}{2}, \\[6pt]
[1-x,x] & & & \text{for } \tfrac{1}{2} < x \leqslant 1. \\[6pt]
\end{cases}$$
(We can ignore the case where $x=\tfrac{1}{2}$ since this occurs with probability zero.) Over this support we have the conditional density:
$$\begin{aligned}
p_{Y|X}(y|x)
&= \frac{1}{|1-2x|} \cdot p_C \bigg( \frac{y-x}{1-2x} \bigg) \\[6pt]
&= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( 1 - \bigg( \frac{y-x}{1-2x} \bigg)^2 \bigg)^{-1/2} \\[6pt]
&= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{(1-2x)^2 - (y-x)^2}{(1-2x)^2} \bigg)^{-1/2} \\[6pt]
&= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{(1-4x+4x^2) - (y^2-2xy+x^2)}{(1-2x)^2} \bigg)^{-1/2} \\[6pt]
&= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \bigg( \frac{1 - 4x + 3x^2 + 2xy - y^2}{(1-2x)^2} \bigg)^{-1/2} \\[6pt]
&= \frac{1}{|1-2x|} \cdot \frac{2}{\pi} \cdot \frac{|1-2x|}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \\[6pt]
&= \frac{2}{\pi} \cdot \frac{1}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}}. \\[6pt]
\end{aligned}$$
Inverting the support we have $\text{supp}(X|Y=y) =
[0,\min(y,1-y)] \cup [\max(y,1-y),1]$. Thus, applying the law of total probability then gives you:
$$\begin{aligned}
p_Y(y)
&= \int \limits_0^1 p_{Y|X}(y|x) p_X(x) \ dx \\[6pt]
&= \int \limits_0^{\min(y,1-y)} p_{Y|X}(y|x) p_X(x) \ dx
+ \int \limits_{\max(y,1-y)}^1 p_{Y|X}(y|x) p_X(x) \ dx \\[6pt]
&= \frac{2}{\pi} \frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha) \Gamma(\beta)} \Bigg[ \quad \int \limits_0^{\min(y,1-y)} \frac{x^{\alpha-1} (1-x)^{\beta-1}}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \ dx \\
&\quad \quad \quad \quad \quad \quad \quad \quad + \int \limits_{\max(y,1-y)}^1 \frac{x^{\alpha-1} (1-x)^{\beta-1}}{\sqrt{1 - 4x + 3x^2 + 2xy - y^2}} \ dx
\Bigg]. \\[6pt]
\end{aligned}$$
There is no closed form for this integral so it must be evaluated using numerical methods.
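A numerical sketch in R (with assumed shape parameters $\alpha = \beta = 2$, chosen only for illustration) evaluates the integral form of $p_Y$ and compares it with a kernel density estimate from a direct simulation of $Y = X + (1-2X)\cos\theta$:
set.seed(1)
a <- 2; b <- 2                                  # assumed shape parameters
py <- function(y) {                             # the integral form derived above
  g <- function(x) dbeta(x, a, b) * (2/pi)/sqrt(1 - 4*x + 3*x^2 + 2*x*y - y^2)
  integrate(g, 0, min(y, 1 - y))$value + integrate(g, max(y, 1 - y), 1)$value
}
x <- rbeta(1e5, a, b); theta <- runif(1e5, 0, pi/2)
ysim <- x + (1 - 2*x)*cos(theta)
kd <- density(ysim)
rbind(numerical = sapply(c(0.3, 0.7), py),
      simulated = approx(kd$x, kd$y, xout = c(0.3, 0.7))$y)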
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/323742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
Using Chebyshev's inequality to obtain lower bounds Let $X_1$ and $X_2$ be i.i.d. continuous random variables with pdf $f(x) = 6x(1-x), 0<x<1$ and $0$, otherwise.
Using Chebyshev's inequality, find the lower bound of $P\left(|X_1 + X_2-1| \le\frac{1}{2}\right)$
What I did:
Using as Chebyshev's inequality,
$P(|X-\mu|\ge a)\le \frac{\sigma^2}{a^2}$
Where $a=\frac{1}{2}$
Finding the variance: $E[X^2] - (E[X])^2$
$ E[X] = \int_{0}^{1} x6x(1-x) dx$ $=6\left[\frac{x^ 3}{3}-\frac{x^4}{4}\right]_0^1=6\left[\frac{1}{3}-\frac{1}{4}\right]= \frac{6}{12}=\frac{1}{2}$
$ E[X^2]= \int_{0}^{1}x^{2}6x(1-x)dx=6\left[\frac{x^ 4}{4}-\frac{x^5}{5}\right]_0^1=6\left[\frac{1}{4}-\frac{1}{5}\right]=\frac{6}{20}$
Therefore, $\sigma^2=\frac{6}{20}-\left(\frac{1}{2}\right)^2=\frac{1}{20}$
Putting in Chebyshev's inequality,
$\frac{\sigma^2}{a^2} $= $\left[\frac{\frac{1}{20}}{\left(\frac{1}{2}\right)^2}\right]$=$\frac{4}{20}=\frac{1}{5} $
But what we need is $\le \frac{1}{2}$ which we get by $1-\frac{1}{5}=\frac{4}{5}$,
But the answer is $\frac{3}{5}$
Where am I going wrong?
| The issue is in the last step: you applied the bound with the variance of a single $X_i$, whereas the relevant quantity is $\mathrm{Var}(X_1+X_2) = 2\cdot\tfrac{1}{20} = \tfrac{1}{10}$ (equivalently, work with $E[(X_1+X_2-1)^2]$ directly, as below).
$$P(|X_1+X_2-1|\leq 1/2)=1-P(|X_1+X_2-1|>1/2).$$
$$P(|X_1+X_2-1|>1/2)=P(X_1^2+X_2^2+2X_1X_2-2X_1-2X_2+1>1/4).$$
$$P(|X_1+X_2-1|^2>1/4)\leq \frac{6/10+2(1/2)^2-1-1+1}{1/4}=4/10.$$
Thus
$$P(|X_1+X_2-1|\leq 1/2)\geq 1-4/10=3/5$$
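Since $f(x) = 6x(1-x)$ is the $\text{Beta}(2,2)$ density, a quick simulation sketch in R shows the actual probability comfortably exceeds the lower bound of $3/5$:
set.seed(1)
x1 <- rbeta(1e6, 2, 2); x2 <- rbeta(1e6, 2, 2)
mean(abs(x1 + x2 - 1) <= 0.5)    # well above 3/5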
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/324862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Simulating ratio of two independent normal variables I have two non centered independent normally distributed variables $X$ and $Y$. Specifically
$$ X \sim N(\mu_1, \sigma_1)$$
$$ Y \sim N(\mu_2, \sigma_2)$$
I am interested in sampling from the ratio $X/Y$. I know that the ratio is Cauchy distributed if the denominator distribution is centered at 0 which mine is not. My question is is there an approximate distribution that I can sample the ratio $X/Y$ from using the means and standard deviations from the normal distributions?
| An exact expression for the ratio of two correlated normal random variables was given by Hinkley in 1969. The notation is taken from the stackexchange post
For $Z = \frac{X}{Y}$ with $$
\begin{bmatrix}X\\Y\end{bmatrix} \sim N\left(\begin{bmatrix} \mu_x \\ \mu_y \end{bmatrix} , \begin{bmatrix} \sigma_x^2 & \rho \sigma_x \sigma_y \\ \rho \sigma_x \sigma_y & \sigma_y^2 \end{bmatrix} \right)
$$
The exact probability density function (pdf) result is:
$$f(z) = \frac{b(z)d(z)}{a(z)^3} \frac{1}{\sqrt{2\pi} \sigma_X\sigma_Y} \left[ \Phi \left( \frac{b(z)}{\sqrt{1-\rho^2}a(z)} \right) - \Phi \left( - \frac{b(z)}{\sqrt{1-\rho^2}a(z)} \right) \right] + \frac{\sqrt{1-\rho^2}}{\pi \sigma_X \sigma_Y a(z)^2} \exp \left( -\frac{c}{2(1-\rho^2)}\right)
$$ with $$ \begin{array}{}
a(z) &=& \left( \frac{z^2}{\sigma_X^2} - \frac{2 \rho z}{\sigma_X \sigma_Y} + \frac{1}{\sigma_Y^2} \right) ^{\frac{1}{2}} \\
b(z) &=& \frac{\mu_X z}{ \sigma_X^2} - \frac{\rho (\mu_X+ \mu_Y z)}{ \sigma_X \sigma_Y} + \frac{\mu_Y}{\sigma_Y^2} \\
c &=& \frac{\mu_X^2}{\sigma_X^2} - \frac{2 \rho \mu_X \mu_Y}{\sigma_X \sigma_Y} + \frac{\mu_Y^2}{\sigma_Y^2}\\
d(z) &=& \text{exp} \left( \frac {b(z)^2 - c a(z) ^2}{2(1-\rho^2)a(z)^2}\right)
\end{array}$$
And an approximation of the CDF, based on the asymptotic behaviour as $\mu_Y/\sigma_Y \to \infty$, is: $$
F(z) \to \Phi\left( \frac{z - \mu_X/\mu_Y}{\sigma_X \sigma_Y a(z)/\mu_Y} \right)
$$
You end up with the Delta method result when you insert the approximation $a(z) = a(\mu_X/\mu_Y)$ $$a(z) \sigma_X \sigma_Y /\mu_Y \approx a(\mu_X/\mu_Y) \sigma_X \sigma_Y /\mu_Y = \left( \frac{\mu_X^2\sigma_Y^2}{\mu_Y^4} - \frac{2 \mu_X \sigma_X \sigma_Y}{\mu_Y^3} + \frac{\sigma_X^2}{\mu_Y^2} \right) ^{\frac{1}{2}}$$
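A sketch in R of Hinkley's density (with $\rho = 0$ for the independent case in the question, and illustrative parameter values), compared against simulation:
dratio <- function(z, mx, my, sx, sy, rho = 0) {
  a  <- sqrt(z^2/sx^2 - 2*rho*z/(sx*sy) + 1/sy^2)
  b  <- mx*z/sx^2 - rho*(mx + my*z)/(sx*sy) + my/sy^2
  cc <- mx^2/sx^2 - 2*rho*mx*my/(sx*sy) + my^2/sy^2
  d  <- exp((b^2 - cc*a^2)/(2*(1 - rho^2)*a^2))
  b*d/(a^3*sqrt(2*pi)*sx*sy) * (pnorm(b/(sqrt(1 - rho^2)*a)) - pnorm(-b/(sqrt(1 - rho^2)*a))) +
    sqrt(1 - rho^2)/(pi*sx*sy*a^2) * exp(-cc/(2*(1 - rho^2)))
}
set.seed(1)
x <- rnorm(1e6, 3, 1); y <- rnorm(1e6, 5, 0.5)
c(mean(x/y <= 0.7),
  integrate(dratio, -10, 0.7, mx = 3, my = 5, sx = 1, sy = 0.5)$value)   # both ~CDF at 0.7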
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/355673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Is the time series $Y_t = \frac{1}{2} Y_{t-1} + \frac{1}{2} Y_{t-2} - \frac{1}{3} \epsilon_{t-1} + \epsilon_t$ stationary? How can I tell if the series $Y_t = \frac{1}{2} Y_{t-1} + \frac{1}{2} Y_{t-2} - \frac{1}{3} \epsilon_{t-1} + \epsilon_t$ is stationary?
| Letting $B$ represent the backshift operator, you can write this model in compact form as:
$$(1 - \tfrac{1}{2} B - \tfrac{1}{2} B^2) Y_t = (1 - \tfrac{1}{3} B) \epsilon_{t}.$$
The auto-regressive characteristic polynomial for the model can be factorised as:
$$\phi(B) = 1 - \tfrac{1}{2} B - \tfrac{1}{2} B^2 = (1 + \tfrac{1}{2}B)(1 - B).$$
From here it should be relatively simple to see if you have stationarity. Do you know the required conditions for this?
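As a quick check in R, the factorisation can be read off from the roots of the AR polynomial:

polyroot(c(1, -0.5, -0.5))   # roots of 1 - 0.5 B - 0.5 B^2: they are 1 and -2
# recall what the location of these roots relative to the unit circle implies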
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/371948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Variance for exponential smoothing I want to obtain the analytical expression for the variance of simple exponential smoothing. Please help verify it and see if the expression could be further simplified, thanks.
Assume the discrete-time process $(D_i)_{1 \le i\le t}$ is i.i.d. and $(\hat{D}_i)_{1 \le i\le t}$ is the predictor.
The standard practice is to define the first smoothed value as
$$ \hat{D}_1 = D_1 $$
Let $T_a \ge 0$. Observe that for $t > 1$,
$$ \hat{D}_t = \left( \frac{1}{1+T_a} \right) D_t + \left( 1 - \frac{1}{1+T_a} \right) \hat{D}_{t-1}$$
$$ \hat{D}_t = \left( \frac{1}{1+T_a} \right) D_t + \left( 1 - \frac{1}{1+T_a} \right)
\left( \left( \frac{1}{1+T_a} \right) D_{t-1} +\left( 1 - \frac{1}{1+T_a} \right)
\left( ... \right) \right)$$
Expanding out and pulling out the common term,
$$ \hat{D}_t = \left( \frac{1}{1+T_a} \right) \left[ D_t + \left( 1 - \frac{1}{1+T_a} \right) D_{t-1} +
\left( 1 - \frac{1}{1+T_a} \right)^2 D_{t-2} + ... +
\left( 1 - \frac{1}{1+T_a} \right)^{t-2} D_{t-(t-2)} \right] $$
$$ + \left( 1 - \frac{1}{1+T_a} \right)^{t-1} D_1 $$
Then, by the i.i.d. property,
$$ Var(\hat{D_t}) = \left(\frac{1}{1+T_a} \right)^2
\left[ 1 + \left( 1 - \frac{1}{1+T_a} \right)^2 + \left( 1 - \frac{1}{1+T_a} \right)^4 + ... +
\left( 1 - \frac{1}{1+T_a} \right)^{2(t-2)} \right] Var(D)
$$
$$ + \left( 1 - \frac{1}{1+T_a} \right)^{2(t-1)} Var(D) $$
Since $0 \le 1 - \frac{1}{1+T_a} < 1$, by the formula for a geometric series,
$$ Var(\hat{D_t}) = \left\{ \left( \frac{1}{1+T_a} \right)^2
\left[ \frac{1 - \left(1- \frac{1}{1+T_a} \right)^{2 (t-1)} }{1 - \left( 1- \frac{1}{1+T_a} \right)^2 } \right]
+ \left( 1 - \frac{1}{1+T_a} \right)^{2(t-1)} \right\} Var(D) $$
| Hi: Suppose that you have the following exponential smoothing model where the data, $y_t$, has variance $\sigma^2_y$ :
$\tilde y_t = (1-\lambda) \tilde y_{t-1} + \lambda y_t$.
Then it can be shown that, in the steady state (large $t$), $\sigma^2_{\tilde{y}_{t}} = \frac{\lambda}{2-\lambda}\, \sigma^2_y$, which is the limit of the finite-$t$ expression derived above.
The proof is on page 72 of Box and Luceno's "Statistical Control by Monitoring and Feedback Adjustment".
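For a quick check of the finite-$t$ expression above, here is a small R simulation (illustrative choices $T_a = 2$, $t = 10$, $Var(D) = 1$):

set.seed(1)
Ta <- 2; alpha <- 1/(1 + Ta); t_end <- 10; reps <- 1e5
smooth_last <- function(D) {              # returns D_hat_t for one sample path
  Dhat <- D[1]
  for (t in 2:length(D)) Dhat <- alpha * D[t] + (1 - alpha) * Dhat
  Dhat
}
sims <- replicate(reps, smooth_last(rnorm(t_end)))
var(sims)                                 # simulated Var(D_hat_t)
beta <- 1 - alpha                         # closed form from the question
alpha^2 * (1 - beta^(2*(t_end - 1))) / (1 - beta^2) + beta^(2*(t_end - 1))
alpha / (2 - alpha)                       # large-t limit lambda/(2 - lambda)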
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/572437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cauchy distribution (likelihood and Fisher information) I have a three part question:
1) If I have a $Cauchy(\theta, 1)$ with density:
$p(x-\theta) = \frac{1}{\pi\{1+{(x-\theta)^2}\}}$
and $x_1, \ldots, x_n$ form an i.i.d. sample. I see on Wikipedia that this gives a log-likelihood of:
$l(\theta) = -n\log\pi - \sum_{i=1}^{n} \log(1+{(x_i-\theta)^2})$
But how is that derived?
2) What derivative rules would I use to show that:
$l'(\theta) = -2 \sum_{i=1}^{n} \frac{x_i-\theta}{1+(x_i-\theta)^2}$
(I feel this is probably straight-forward, but I am missing it)
3) How would I obtain that the Fisher information is equal to $I(\theta)=\frac{n}{2}$?
I started by:
$\frac{\partial^2f(x;\theta)}{\partial\theta^2} = \frac{8(x-\theta)^2}{\pi[1+(x-\theta)^2]^3} - \frac{2}{\pi[1+(x-\theta)^2]^2}$
I think that I next need to find the integral of:
$I(\theta) = -E\left[\frac{\partial^2f(x;\theta)}{\partial\theta^2}\right]$
But I cannot get this last step of reducing this information to $\frac{n}{2}$.
| The Fisher information for one observation is given by\begin{align*}I(\theta) &= -\mathbb{E}_\theta\left[\frac{\partial^2 \log f(X;\theta)}{\partial\theta^2}\right]\\ &=\mathbb{E}_\theta\left[ \frac{\partial^2 \log \{1+(X-\theta)^2\}}{\partial\theta^2}\right]\\
&=2\mathbb{E}_\theta\left[ -\frac{\partial }{\partial\theta}\frac{(X-\theta)}{1+(X-\theta)^2}\right]\\
&=2\mathbb{E}_\theta\left[\frac{1}{1+(X-\theta)^2}-\frac{2(X-\theta)^2}{[1+(X-\theta)^2]^2}\right]\\
&= \frac{2}{\pi}\int_\mathbb{R} \frac{1}{[1+(x-\theta)^2]^2}-\frac{2(x-\theta)^2}{[1+(x-\theta)^2]^3} \text{d}x\\
&= \frac{2}{\pi}\int_\mathbb{R} \frac{1}{[1+x^2]^2}-\frac{2x^2}{[1+x^2]^3} \text{d}x\\
&= \frac{2}{\pi}\int_\mathbb{R} \frac{1}{[1+x^2]^2}-\frac{2}{[1+x^2]^2}+\frac{2}{[1+x^2]^3} \text{d}x\\
&= \frac{2}{\pi}\int_\mathbb{R} \frac{-1}{[1+x^2]^2}+\frac{2}{[1+x^2]^3} \text{d}x
\end{align*}
because the integral (and the information) is translation invariant.
Now it is easy to establish a recurrence relation on$$I_k=\int_\mathbb{R} \frac{1}{[1+x^2]^k}\text{d}x$$Indeed
\begin{align*}
I_k &= \int_\mathbb{R} \frac{1+x^2}{[1+x^2]^{k+1}}\text{d}x\\
&= I_{k+1} + \int_\mathbb{R} \frac{2kx}{[1+x^2]^{k+1}}\frac{x}{2k}\text{d}x\\
&= I_{k+1} + \frac{1}{2k} \int_\mathbb{R} \frac{1}{[1+x^2]^{k}}\text{d}x
= I_{k+1} + \frac{1}{2k} I_k
\end{align*}
by an integration by parts. Hence
$$I_1=\pi\quad\text{and}\quad I_{k+1}=\frac{2k-1}{2k}I_k\quad k\ge 1$$
which implies
$$I_1=\pi\quad I_2=\frac{\pi}{2}\quad I_3=\frac{3\pi}{8}$$
and which leads to the Fisher information:
$$I(\theta)=\frac{2}{\pi}\left\{-I_2+2I_3\right\}=\frac{2}{\pi}\left\{\frac{-\pi}{2}+\frac{3\pi}{4}\right\}=\frac{1}{2}$$
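A quick numerical check of this value in R (one observation, $\theta = 0$):

# -d^2/dtheta^2 log f(x; theta) = 2(1 - u^2)/(1 + u^2)^2 with u = x - theta
neg_sec_deriv <- function(x, theta = 0) {
  u <- x - theta
  2 * (1 - u^2) / (1 + u^2)^2
}
integrate(function(x) neg_sec_deriv(x) * dcauchy(x), -Inf, Inf)   # approx 0.5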
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/145017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Correlation when adding balls to urn It is a pretty simple question, but I wanted to know if I got it right (and I had a weird result in the end):
There are 3 White balls and 1 Green ball in a urn. We take one ball from it, write down the colour then put it back with +c balls of the same colour. Now we take a second ball. $X_{i} = 1$ if the ith ball is green and $X_{i} = 0$ if it is white, for $i = 1,2$.
a) What is the joint probability distribution of $X_{1}$ and $X_{2}$
b) What is the correlation between $X_{1}$ and $X_{2}$? What happens when $c$ goes to infinity?
Here's what I did:
a)
$P(0,0) = \frac{3}{4} \cdot \frac{3+c}{4+c} = \frac{9+3c}{16+4c}$
$P(0,1) = \frac{3}{4} \cdot \frac{1}{4+c} = \frac{3}{16+4c}$
$P(1,0) = \frac{1}{4} \cdot \frac{3}{4+c} = \frac{3}{16+4c}$
$P(1,1) = \frac{1}{4} \cdot \frac{1+c}{4+c} = \frac{1+c}{16+4c}$
b)
$E[X_{1}] = E[(X_{1})^2] = 1 \cdot \frac{1}{4} = \frac{1}{4}$
$E[X_{2}] = E[(X_{2})^2] = 1 \cdot \frac{1+c}{4+c} + 1 \cdot \frac{1}{4+c} = \frac{2+c}{4+c}$ (all other sum terms are 0)
$E[X_{1}X_{2}] = 1 \cdot 1 \cdot \frac{1+c}{4c+16}$ (all other sum terms are 0)
$Var(X_{1}) = \frac{1}{4} - \frac{1}{16} = \frac{3}{16}$
$Var(X_{2}) = \frac{2+c}{4+c} - (\frac{2+c}{4+c})^2 = \frac{4+2c}{(4+c)^2}$
$Cov(X_{1},X_{2}) = \frac{1+c}{4c+16} - \frac{1}{4} \cdot \frac{2+c}{4+c} = -1$
$Cor(X_{1},X_{2}) = \frac{-1}{\sqrt{\frac{3}{16} \cdot \frac{4+2c}{(4+c)^2}}} = \frac{-c-16}{\sqrt{12+6c}}$
And then when c goes to inf, the correlation goes to -inf, which doesn't seem to make sense...
Did I go wrong somewhere?
| I'm having trouble following your logic, but yes, you've made some mistakes (a correlation cannot exceed one in absolute value, for example). $\text{E}(X_1)$ is easy enough to find so let's start by calculating $\text{E}(X_2)$. The key is to condition on $X_1$ and then calculate the expectation in pieces.
\begin{align}
\text{E}(X_2) &= \text{E} [ \text{E} (X_2 \mid X_1) ] \\
&= P(X_1 = 1) \text{E} (X_2 \mid X_1 = 1) + P(X_1 = 0) \text{E}(X_2 \mid X_1 = 0) \\
&= \frac{1}{4} \cdot \frac{1 + c}{4 + c} + \frac{3}{4} \cdot \frac{1}{4 + c} \\
&= \frac{4 + c}{4 (4 + c)} \\
&= \frac{1}{4} .
\end{align}
This is interesting as it says that on average $X_2$ behaves just like $X_1$. Now since these are Bernoulli random variables with the same expectation the variances are easy:
\begin{align}
\text{Var}(X_i) &= \text{E}(X_i^2) - \text{E}(X_i)^2 \\
&= \frac{1}{4} - \frac{1}{16} \\
&= \frac{3}{16} .
\end{align}
The only thing left to calculate is the covariance and we can use the identity $\text{Cov}(X_1, X_2) = \text{E}(X_1 X_2) - \text{E}(X_1) \text{E}(X_2)$. We already know the rightmost term so for the other we have
\begin{align}
\text{E}(X_1 X_2) &= P(X_1 = 1 \cap X_2 = 1) \\
&= P(X_1 = 1) P(X_2 = 1 \mid X_1 = 1) \\
&= \frac{1 + c}{4 (4 + c)}
\end{align}
yielding
\begin{align}
\text{Cov}(X_1, X_2) &= \frac{1 + c}{4 (4 + c)} - \frac{1}{16} \\
&= \frac{3c}{16 (4 + c)} .
\end{align}
If we now divide this by $\sqrt{\text{Var}(X_1) \text{Var}(X_2)}$ we get
\begin{align}
\text{Corr}(X_1, X_2) &= \frac{c}{4 + c} .
\end{align}
This makes sense since as $c \to \infty$ we have $X_1 = X_2$ with high probability so the correlation should approach one.
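A small simulation in R (with an illustrative value of $c$) confirms the result:

set.seed(1)
sim_corr <- function(c_add, reps = 2e5) {
  x1 <- rbinom(reps, 1, 1/4)                                   # 1 = green on draw 1
  p2 <- ifelse(x1 == 1, (1 + c_add)/(4 + c_add), 1/(4 + c_add))
  x2 <- rbinom(reps, 1, p2)                                    # green on draw 2
  cor(x1, x2)
}
c_add <- 5
sim_corr(c_add)        # simulated correlation
c_add / (4 + c_add)    # theoretical value 5/9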
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/237277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove using correlation to do t-test is equivalent to the standard t-test formula in linear regression? $\newcommand{\Cor}{\operatorname{Cor}} \newcommand{\Cov}{\operatorname{Cov}}$My question is why the following expression holds?
$$t = \frac{\hat{\beta_1}}{\operatorname{se}(\beta_1)} = \frac{\Cor(Y,X)\sqrt{n-2}}{\sqrt{1 - \Cor^2(Y,X)}}$$
Here is what I got so far:
\begin{align}
\frac{\hat{\beta_1}}{\operatorname{se}(\beta_1)} &= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\operatorname{se}(\beta_1)}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\sqrt{\frac{\sum(\hat{Y_i} - Y_i)^2}{n-2}\cdot\frac{1}{\sum(X_i - \overline{X})^2}}}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}\sqrt{n-2}}{\sqrt{\frac{\sum(\hat{Y_i} - Y_i)^2}{\sum(X_i - \overline{X})^2}}}\label{a}\tag{1}
\end{align}
\begin{align}
\frac{\Cor(Y,X)\sqrt{n-2}}{\sqrt{1 - \Cor^2(Y,X)}}
&= \frac{\frac{\Cov(Y, X)}{S_xS_y}\sqrt{n-2}}{\sqrt{1 - \left(\frac{\Cov(Y, X)}{S_xS_y}\right)^2}} \label{b}\tag{2}
\end{align}
From $\ref{a}$ and $\ref{b}$, we then need to show:
\begin{align}
\frac{1}{S_x\sqrt{\frac{\sum(\hat{Y_i} - Y_i)^2}{\sum(X_i - \overline{X})^2}}} &= \frac{1}{S_y\sqrt{1 - \left(\frac{\Cov(Y, X)}{S_xS_y}\right)^2}}\\
&=\frac{1}{S_y\frac{\sqrt{S_x^2S_y^2 -\Cov^2(Y, X)}}{S_xS_y}}\\
&=\frac{1}{\frac{\sqrt{S_x^2S_y^2 -\Cov^2(Y, X)}}{S_x}}\label{c}\tag{3}
\end{align}
I tried to expand the $\Cov(Y, X)$ term and left $SSE$ and $SSX$ terms in $\ref{c}$, but there is no any further process.
I am wondering how to continue from $\ref{c}$, or my initial direction of the proof is not correct?
| The trick is the following equation:$$\sum(\hat{Y_i} - Y_i)^2 = SSE = (1 - R^2)SST$$
This is why there is a $\sqrt{1 - \Cor^2(Y,X)}$ term in $(2)$. The whole proof is below:
$$
\begin{align}
t &= \frac{\hat{\beta_1}}{\operatorname{se}(\beta_1)}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\sqrt{\frac{\sum(\hat{Y_i} - Y_i)^2}{n-2}\cdot\frac{1}{\sum(X_i - \overline{X})^2}}}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\sqrt{\frac{(1 - R^2)SST}{n-2}\cdot\frac{1}{\sum(X_i - \overline{X})^2}}}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\sqrt{\frac{(1 - R^2)S_y^2(n-1)}{n-2}\cdot\frac{1}{S_x^2(n-1)}}}\\
&= \frac{\frac{\Cov(Y,X)}{S_x^2}}{\frac{S_y}{S_x}\frac{\sqrt{1 - R^2}}{\sqrt{n - 2}}}\\
&= \frac{\frac{\Cov(Y,X)}{S_yS_x}\sqrt{n-2}}{\sqrt{1 - R^2}}\\
&= \frac{\Cor(Y,X)\sqrt{n-2}}{\sqrt{1 - \Cor^2(Y,X)}}
\end{align}
$$
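A quick numerical check in R with simulated data (any simple linear model will do):

set.seed(1)
n <- 50
x <- rnorm(n)
y <- 1 + 2*x + rnorm(n)
fit <- lm(y ~ x)
t_lm <- summary(fit)$coefficients["x", "t value"]   # t statistic from lm()
r <- cor(y, x)
t_r <- r * sqrt(n - 2) / sqrt(1 - r^2)              # correlation-based formula
c(t_lm, t_r)                                        # the two agree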
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/267023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Showing that the OLS estimator is scale equivariant? I don't have a formal definition of scale equivariance, but here's what Introduction to Statistical Learning says about this on p. 217:
The standard least squares coefficients... are scale equivariant: multiplying $X_j$ by a constant $c$ simply leads to a scaling of the least squares coefficient estimates by a factor of $1/c$.
For simplicity, let's assume the general linear model $\mathbf{y} = \mathbf{X}\boldsymbol\beta + \boldsymbol\epsilon$, where $\mathbf{y} \in \mathbb{R}^N$, $\mathbf{X}$ is a $N \times (p+1)$ matrix (where $p+1 < N$) with all entries in $\mathbb{R}$, $\boldsymbol\beta \in \mathbb{R}^{p+1}$, and $\boldsymbol\epsilon$ is a $N$-dimensional vector of real-valued random variables with $\mathbb{E}[\boldsymbol\epsilon] = \mathbf{0}_{N \times 1}$.
From OLS estimation, we know that if $\mathbf{X}$ has full (column) rank,
$$\hat{\boldsymbol\beta}_{\mathbf{X}} = (\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{y}\text{.}$$
Suppose we multiplied a column of $\mathbf{X}$, say $\mathbf{x}_k$ for some $k \in \{1, 2, \dots, p+1\}$, by a constant $c \neq 0$. This would be equivalent to the matrix
\begin{equation}
\mathbf{X}\underbrace{\begin{bmatrix}
1 & \\
& 1 \\
& & \ddots \\
& & & 1 \\
& & & & c\\
& & & & & 1 \\
& & & & & &\ddots \\
& & & & & & & 1
\end{bmatrix}}_{\mathbf{S}} =
\begin{bmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \cdots & c\mathbf{x}_{k} & \cdots & \mathbf{x}_{p+1}\end{bmatrix} \equiv \tilde{\mathbf{X}}
\end{equation}
where all other entries of the matrix $\mathbf{S}$ above are $0$, and $c$ is in the $k$th entry of the diagonal of $\mathbf{S}$. Then, $\tilde{\mathbf X}$ has full (column) rank as well, and the resulting OLS estimator using $\tilde{\mathbf X}$ as the new design matrix is
$$\hat{\boldsymbol\beta}_{\tilde{\mathbf{X}}} = \left(\tilde{\mathbf{X}}^{T}\tilde{\mathbf{X}}\right)^{-1}\tilde{\mathbf{X}}^{T}\mathbf{y}\text{.}$$
After some work, one can show that
$$\tilde{\mathbf{X}}^{T}\tilde{\mathbf{X}} = \begin{bmatrix}
\mathbf{x}_1^{T}\mathbf{x}_1 & \mathbf{x}_1^{T}\mathbf{x}_2 & \cdots & c\mathbf{x}_1^{T}\mathbf{x}_k & \cdots & \mathbf{x}_1^{T}\mathbf{x}_{p+1} \\
\mathbf{x}_2^{T}\mathbf{x}_1 & \mathbf{x}_2^{T}\mathbf{x}_2 & \cdots & c\mathbf{x}_2^{T}\mathbf{x}_k & \cdots & \mathbf{x}_2^{T}\mathbf{x}_{p+1} \\
\vdots & \vdots & \ddots & \vdots & \cdots & \vdots \\
c\mathbf{x}_k^{T}\mathbf{x}_1 & c\mathbf{x}_k^{T}\mathbf{x}_2 & \cdots & c^2\mathbf{x}_k^{T}\mathbf{x}_k & \cdots & c\mathbf{x}_k^{T}\mathbf{x}_{p+1} \\
\vdots & \vdots & \vdots & \vdots & \cdots & \vdots \\
\mathbf{x}_{p+1}^{T}\mathbf{x}_1 & \mathbf{x}_{p+1}^{T}\mathbf{x}_2 & \cdots & c\mathbf{x}_{p+1}^{T}\mathbf{x}_{p+1} & \cdots & \mathbf{x}_{p+1}^{T}\mathbf{x}_{p+1} \\
\end{bmatrix}$$
and
$$\tilde{\mathbf{X}}^{T}\mathbf{y} = \begin{bmatrix}
\mathbf{x}_1^{T}\mathbf{y} \\
\mathbf{x}_2^{T}\mathbf{y} \\
\vdots \\
c\mathbf{x}_k^{T}\mathbf{y} \\
\vdots \\
\mathbf{x}_{p+1}^{T}\mathbf{y}
\end{bmatrix}$$
How do I go from here to show the claim quoted above (i.e., that $\hat{\boldsymbol\beta}_{\tilde{\mathbf{X}}} = \dfrac{1}{c}\hat{\boldsymbol\beta}_{\mathbf{X}}$)? It's not clear to me how to compute $(\tilde{\mathbf{X}}^{T}\tilde{\mathbf{X}})^{-1}$.
| I figured this out after posting the question. If my work is correct, however, I misinterpreted the claim. The $\dfrac{1}{c}$ scaling only occurs on the one component of $\boldsymbol\beta$ corresponding to the column of $\mathbf{X}$ being multiplied by $c$.
Notice that $\mathbf{S}$, in the notation above, is a diagonal, symmetric $(p+1) \times (p+1)$ matrix and has inverse (because it is diagonal)
$$\mathbf{S}^{-1} = \begin{bmatrix}
1 & \\
& 1 \\
& & \ddots \\
& & & 1 \\
& & & & \frac{1}{c}\\
& & & & & 1 \\
& & & & & &\ddots \\
& & & & & & & 1
\end{bmatrix}\text{.}$$
Note that $(\tilde{\mathbf{X}}^{T}\tilde{\mathbf{X}})^{-1}$ is a $(p+1)\times(p+1)$ matrix. Let's suppose that
$$(\mathbf{X}^{T}\mathbf{X})^{-1} = \begin{bmatrix}
\mathbf{z}_1 & \mathbf{z}_2 & \cdots & \mathbf{z}_k & \cdots & \mathbf{z}_{p+1}
\end{bmatrix}\text{.}$$
Then it follows that
$$(\tilde{\mathbf{X}}^{T}\tilde{\mathbf{X}})^{-1} = [(\mathbf{X}\mathbf{S})^{T}\mathbf{X}\mathbf{S}]^{-1} = (\mathbf{S}^{T}\mathbf{X}^{T}\mathbf{X}\mathbf{S})^{-1} = (\mathbf{S}\mathbf{X}^{T}\mathbf{X}\mathbf{S})^{-1}=\mathbf{S}^{-1}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{S}^{-1}\text{.}$$
Hence,
$$\mathbf{S}^{-1}(\mathbf{X}^{T}\mathbf{X})^{-1} = \begin{bmatrix}
\mathbf{z}_1 \\
& \mathbf{z}_2 \\
& & \ddots \\
& & & \frac{1}{c}\mathbf{z}_k \\
& & & & \ddots \\
& & & & & \mathbf{z}_{p+1}
\end{bmatrix}$$
and multiplying this by $\mathbf{S}^{-1}$ has a similar effect to what multiplying $\mathbf{X}$ by $\mathbf{S}$ did - it remains the same, except $\frac{1}{c}\mathbf{z}_k$ is multiplied by $\frac{1}{c}$:
$$\mathbf{S}^{-1}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{S}^{-1} = \begin{bmatrix}
\mathbf{z}_1 \\
& \mathbf{z}_2 \\
& & \ddots \\
& & & \frac{1}{c^2}\mathbf{z}_k \\
& & & & \ddots \\
& & & & & \mathbf{z}_{p+1}
\end{bmatrix}\text{.}$$
Therefore,
$$\begin{align}
\hat{\boldsymbol\beta}_{\tilde{\mathbf{X}}}&=\mathbf{S}^{-1}(\mathbf{X}^{T}\mathbf{X})^{-1}\mathbf{S}^{-1}(\mathbf{X}\mathbf{S})^{T}\mathbf{y} \\
&= \begin{bmatrix}
\mathbf{z}_1 \\
& \mathbf{z}_2 \\
& & \ddots \\
& & & \frac{1}{c^2}\mathbf{z}_k \\
& & & & \ddots \\
& & & & & \mathbf{z}_{p+1}
\end{bmatrix}\begin{bmatrix}
\mathbf{x}_1^{T}\mathbf{y} \\
\mathbf{x}_2^{T}\mathbf{y} \\
\vdots \\
c\mathbf{x}_k^{T}\mathbf{y} \\
\vdots \\
\mathbf{x}_{p+1}^{T}\mathbf{y}
\end{bmatrix} \\
&= \begin{bmatrix}
\mathbf{z}_1\mathbf{x}_1^{T}\mathbf{y} \\
\mathbf{z}_2\mathbf{x}_2^{T}\mathbf{y} \\
\vdots \\
\frac{1}{c}\mathbf{z}_k\mathbf{x}_k^{T}\mathbf{y} \\
\vdots \\
\mathbf{z}_{p+1}\mathbf{x}_{p+1}^{T}\mathbf{y}
\end{bmatrix}
\end{align}$$
as desired.
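A quick numerical illustration in R (simulated data, scaling one column by an arbitrary constant):

set.seed(1)
n <- 100
X1 <- rnorm(n); X2 <- rnorm(n)
y  <- 1 + 2*X1 - 3*X2 + rnorm(n)
c_scale <- 10
coef(lm(y ~ X1 + X2))
coef(lm(y ~ I(c_scale * X1) + X2))   # only the X1 coefficient is divided by c_scale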
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/311198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Using Chebyshev's inequality to obtain lower bounds Let $X_1$ and $X_2$ be i.i.d. continuous random variables with pdf $f(x) = 6x(1-x), 0<x<1$ and $0$, otherwise.
Using Chebyshev's inequality, find the lower bound of $P\left(|X_1 + X_2-1| \le\frac{1}{2}\right)$
What I did:
Using as Chebyshev's inequality,
$P(|X-\mu|\ge a)\le \frac{\sigma^2}{a^2}$
Where $a=\frac{1}{2}$
Finding the variance: $E[X^2] - (E[X])^2$
$ E[X] = \int_{0}^{1} x6x(1-x) dx$ $=6\left[\frac{x^ 3}{3}-\frac{x^4}{4}\right]_0^1=6\left[\frac{1}{3}-\frac{1}{4}\right]= \frac{6}{12}=\frac{1}{2}$
$ E[X^2]= \int_{0}^{1}x^{2}6x(1-x)dx=6\left[\frac{x^ 4}{4}-\frac{x^5}{5}\right]_0^1=6\left[\frac{1}{4}-\frac{1}{5}\right]=\frac{6}{20}$
Therefore, $\sigma^2=\frac{6}{20}-\left(\frac{1}{2}\right)^2=\frac{1}{20}$
Putting in Chebyshev's inequality,
$\frac{\sigma^2}{a^2} $= $\left[\frac{\frac{1}{20}}{\left(\frac{1}{2}\right)^2}\right]$=$\frac{4}{20}=\frac{1}{5} $
But what we need is $\le \frac{1}{2}$ which we get by $1-\frac{1}{5}=\frac{4}{5}$,
But the answer is $\frac{3}{5}$
Where am I going wrong?
| Comparing the given equation with Chebyshev's inequality, we get $\mu=1$
Since $X$ here is $X_1 + X_2$ with $X_1$ and $X_2$ i.i.d. continuous random variables, both have the same pdf and $E[X]= E[X_1] + E[X_2]$
My mistake was not calculating $X_1$ and $X_2$ separately.
So the calculated $E[X]$ is actually $E[X_1]=\frac{1}{2}$ and similarly $E[X_2]=\frac{1}{2}$
$E[X]= E[X_1] + E[X_2]=\frac{1}{2}+ \frac{1}{2}=1$
And in the very same manner (recognising the same mistake everywhere), using independence for the cross term,
$E[X^2]=E[X_1^2] + 2E[X_1]E[X_2] + E[X_2^2] = \frac{6}{20} + 2\cdot\frac{1}{2}\cdot\frac{1}{2} + \frac{6}{20} = \frac{11}{10}$
Therefore the variance is $E[X^2] - (E[X])^2 = \frac{11}{10} - 1 = \frac{1}{10}$ (equivalently, $Var(X)=Var(X_1)+Var(X_2)=\frac{1}{20}+\frac{1}{20}=\frac{1}{10}$).
Putting in Chebyshev's inequality,
$\frac{\sigma^2}{a^2} $= $\left[\frac{\frac{1}{10}}{\left(\frac{1}{2}\right)^2}\right]$=$\frac{4}{10}=\frac{2}{5} $
So the lower bound is $1-\frac{2}{5}=\frac{3}{5}$, which is the required answer.
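A small simulation sketch in R (noting that $6x(1-x)$ on $(0,1)$ is the Beta$(2,2)$ density) shows how conservative the bound is:

set.seed(1)
x1 <- rbeta(1e6, 2, 2)
x2 <- rbeta(1e6, 2, 2)
mean(abs(x1 + x2 - 1) <= 1/2)   # actual probability, well above the bound 3/5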
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/324862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Proving the maximum possible sample variance for bounded data There is a fascinating question here looking at the distribution of the sample mean and sample variance for the uniform distribution and other bounded distributions. This led me to wonder about the maximum possible value of the sample variance in these cases. Suppose we have data values $x_1,...,x_n$ that are known to fall within the bounds $a \leqslant x_i \leqslant b$. What is the maximum possible value of the sample variance?
Intuitively, it seems to me that the answer should be that you have half of the data points at each boundary. In the case where there is an even number of data points this gives the sample variance:
$$\begin{align}
s^2
&= \frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x}_n)^2 \\[6pt]
&= \frac{1}{n-1} \Bigg[ \frac{n}{2} \Big( a - \frac{a+b}{2} \Big)^2 + \frac{n}{2} \Big( b - \frac{a+b}{2} \Big)^2 \Bigg] \\[6pt]
&= \frac{n}{n-1} \cdot \frac{1}{2} \Bigg[ \Big( \frac{a-b}{2} \Big)^2 + \Big( \frac{b-a}{2} \Big)^2 \Bigg] \\[6pt]
&= \frac{n}{n-1} \cdot \Big( \frac{b-a}{2} \Big)^2. \\[6pt]
\end{align}$$
As $n \rightarrow \infty$ this converges to the square of the half-length between the boundaries. Is there a proof that this result is correct?
| Proving the bound: The sample variance is not affected by location shifts in the data, so we will use the values $0 \leqslant y_i \leqslant b-a$ and compute the sample variance from these values. As a preliminary observation we first note that the bound $y_i \leqslant b-a$ implies that we have $\sum y_i^2
\leqslant \sum (b-a) y_i = (b-a) n \bar{y}_n$, which gives the upper bound:
$$\begin{align}
s_n^2
&= \frac{1}{n-1} \sum_{i=1}^n (y_i - \bar{y}_n)^2 \\[6pt]
&= \frac{1}{n-1} \Bigg[ \sum_{i=1}^n y_i^2 - n \bar{y}_n^2 \Bigg] \\[6pt]
&\leqslant \frac{1}{n-1} \Bigg[ (b-a) n \bar{y}_n - n \bar{y}_n^2 \Bigg] \\[6pt]
&= \frac{n}{n-1} \cdot \bar{y}_n (b-a - \bar{y}_n). \\[6pt]
\end{align}$$
Now, using the fact that $0 \leqslant \bar{y}_n \leqslant b-a$ this upper bound is maximised when $\bar{y}_n = \tfrac{b-a}{2}$, which gives:
$$\begin{align}
s_n^2
&\leqslant \frac{n}{n-1} \cdot \bar{y}_n (b-a - \bar{y}_n) \quad \ \ \ \\[6pt]
&\leqslant \frac{n}{n-1} \cdot \Big( \frac{b-a}{2} \Big)^2. \\[6pt]
\end{align}$$
This gives an upper bound for the sample variance. Splitting the points between the two bounds $a$ and $b$ (for an even number of points) achieves this upper bound, so it is the maximum possible sample variance for bounded data.
Maximum possible sample variance with an odd number of data points: It is also possible to find the upper bound when we have an odd number of data points. One way to do this is to use the iterative updating formula for the sample variance (see e.g., O'Neill 2014). If we consider the maximising case for an even number $n-1$ of values and then take the final value to be $x_n = x$, then we have:
$$\begin{align}
\quad \quad \quad \quad \quad \quad s_n^2
&= \frac{1}{n-1} \Bigg[ (n-2) s_{n-1}^2 + \frac{n-1}{n} \cdot (x-\bar{x}_{n-1})^2 \Bigg] \\[6pt]
&\leqslant \frac{1}{n-1} \Bigg[ (n-1) \cdot \Big( \frac{b-a}{2} \Big)^2 + \frac{n-1}{n} \cdot \Big( x-\frac{b-a}{2} \Big)^2 \Bigg] \\[6pt]
&= \Big( \frac{b-a}{2} \Big)^2 + \frac{1}{n} \cdot \Big( x-\frac{b-a}{2} \Big)^2. \\[6pt]
\end{align}$$
This quantity is maximised by taking either $x=a$ or $x=b$ (i.e., the last point is also on the boundary), which gives:
$$\begin{align}
s_n^2
&= \Big( \frac{b-a}{2} \Big)^2 + \frac{1}{n} \cdot \Big( \frac{b-a}{2} \Big)^2 \quad \quad \quad \quad \quad \ \\[6pt]
&= \Big( 1 + \frac{1}{n} \Big) \cdot \Big( \frac{b-a}{2} \Big)^2 \\[6pt]
&= \frac{n+1}{n} \cdot \Big( \frac{b-a}{2} \Big)^2 \\[6pt]
&= \frac{(n-1)(n+1)}{n^2} \cdot \frac{n}{n-1} \cdot \Big( \frac{b-a}{2} \Big)^2 \\[6pt]
&= \frac{n^2-1}{n^2} \cdot \frac{n}{n-1} \cdot \Big( \frac{b-a}{2} \Big)^2. \\[6pt]
\end{align}$$
This is a slightly lower sample variance value than when we have an even number of data points, but the two cases converge when $n \rightarrow \infty$.
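A quick check in R (illustrative values $a=0$, $b=1$, even $n$):

a <- 0; b <- 1; n <- 10
x <- rep(c(a, b), each = n/2)    # half the points at each boundary
var(x)                           # sample variance of the extreme configuration
n/(n - 1) * ((b - a)/2)^2        # the bound derived above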
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/511110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Geometric conditional Probability In order to start a game, each player takes turns throwing a fair six-sided dice until a $6$ is obtained. Let $X$ be the number of turns a player takes to start the game. Given that $X=3$, find the probability that the total score on all three of the dice is less than $10$.
So initially I thought this was straight forward. Total number of outcomes that three dice can sum up to less than $10$ is equal to $60$. Total amount of outcomes is $6^3=216$.
Therefore probability of the total summing to less than $10$ is $$P(\text{sum} < 10)=\frac{60}{6^3}=\frac{5}{18}$$.
However, this did not match the provided solution of $1/12$ and so I thought it had something to do with conditional probability. i.e. $P(\text{sum} < 10 | X = 3)$.
So naturally, $$P(\text{sum} < 10 | X = 3)=\frac{P(X = 3 | \text{sum} < 10) P(\text{sum} < 10)}{P(X=3)}$$
Out of the $60$ outcomes that sum less than $10$, only $6$ of those outcomes contain sixes. So that when we have to calculate the probability of taking three turns to get one $6$ we have to fail twice and succeed once and hence$$P(X = 3 | \text{sum} < 10) = \left(\frac{9}{10}\right)^2\left(\frac{1}{10}\right)=\frac{81}{1000}$$
$P(X=3)$ is simply $\left(\frac{5}{6}\right)^2\left(\frac{1}{6}\right)=\frac{25}{216}$ and so finally
$$P(\text{sum} < 10 | X = 3)=\frac{P(X = 3 | \text{sum} < 10) P(\text{sum} < 10)}{P(X=3)}=\frac{\frac{81}{1000} \frac{5}{18}}{\frac{25}{216}}=\frac{243}{1250}$$
Where did I go wrong? Alternatively, maybe the textbook is incorrect?
| For a more brute force approach, consider the following. You have three numbers, you know the last one is 6. There are $6 \cdot 6=36$ possible combinations for the first two numbers (assuming they can be any number from 1 to 6, which your problem seems to state that they cannot). Asking that the sum of these three numbers is $<10$ is equivalent to asking that the sum of the first two numbers is $<4$. Counting it all out you find there are 3 such combinations. Therefore $3/36=1/12$.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/554239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Conditional vs unconditional expectation I'm having trouble understanding the calculation of conditional versus unconditional expectations in this case:
\begin{array}{c c c c}
& \quad &X=1\quad &X=-1\quad &X=2 \\
&Y=1\quad &0.25\quad &0.25\quad &0 \\
&Y=2\quad &0\quad &0\quad &0.5
\end{array}
\begin{align}
E(Y)&=\sum_Y y f(y) \\
E(Y|X)&=\sum_Y y f(y|x)
\end{align}
To me, both calculations are $1*0.25 + 1*0.25 + 2*0.5 = 1.5$. What am I doing wrong?
| If $p_{X,Y}(x,y)$ denotes the joint probability mass function of discrete
random variables $X$ and $Y$, then the marginal mass functions are
$$\begin{align}
p_X(x) &= \sum_y p_{X,Y}(x,y)\\
p_Y(y) &= \sum_x p_{X,Y}(x,y)
\end{align}$$
and so we have that
$$E[Y] = \sum_y y\cdot p_{Y}(y) = \sum_y y\cdot \sum_xp_{X,Y}(x,y)
= \sum_x\sum_y y\cdot p_{X,Y}(x,y).\tag{1}$$
Now, the conditional probability mass function of $Y$ given that
$X = x$ is
$$p_{Y\mid X}(y \mid X=x) = \frac{p_{X,Y}(x,y)}{p_X(x)}
= \frac{p_{X,Y}(x,y)}{\sum_y p_{X,Y}(x,y)}\tag{2}$$
and
$$E[Y\mid X=x] = \sum_y y\cdot p_{Y\mid X}(y \mid X=x).\tag{3}$$
The value of this expectation depends on our choice of the value $x$
taken on by $X$ and is thus a random variable; indeed, it is a function
of the random variable $X$, and this random variable is denoted
$E[Y\mid X]$. It happens to take on values
$E[Y\mid X = x_1], E[Y\mid X=x_2], \cdots $ with probabilities
$p_X(x_1), p_X(x_2), \cdots$ and so its expected value is
$$\begin{align}E\bigr[E[Y\mid X]\bigr] &= \sum_x E[Y\mid X = x]\cdot p_X(x)
&\text{note the sum is w.r.t}~x\\
&= \sum_x \left[\sum_y y\cdot p_{Y\mid X}(y \mid X=x)\right]\cdot p_X(x)
&\text{using}~ (3)\\
&= \sum_x \left[\sum_y y\cdot \frac{p_{X,Y}(x,y)}{p_X(x)}\right]\cdot p_X(x)
&\text{using}~ (2)\\
&= \sum_x \sum_y y\cdot p_{X,Y}(x,y)\\
&= E[Y] &\text{using}~(1)
\end{align}$$
In general, the number $E[Y\mid X = x]$ need not equal
the number $E[Y]$ for any $x$. But, if $X$ and $Y$ are
independent random variables and so $p_{X,Y}(x,y) = p_X(x)p_Y(y)$
for all $x$ and $y$, then
$$p_{Y\mid X}(y \mid X=x) = \frac{p_{X,Y}(x,y)}{p_X(x)}
= \frac{p_X(x)p_Y(y)}{p_X(x)} = p_Y(y)\tag{4}$$
and so $(3)$ gives
$$E[Y\mid X=x] = \sum_y y\cdot p_{Y\mid X}(y \mid X=x)
= \sum_y y\cdot p_Y(y) = E[Y]$$
for all $x$, that is, $E[Y\mid X]$ is a degenerate random
variable that equals the number $E[Y]$ with probability $1$.
In your particular example, BabakP's answer after correction
by Moderator whuber shows that $E[Y\mid X = x]$ is a random
variable that takes on values $1, 1, 2$ with probabilities
$0.25, 0.25, 0.5$ respectively and so its expectation is
$0.25\times 1 + 0.25\times 1 + 0.5\times 2 = 1.5$ while the
$Y$ itself is a random variable taking on values $1$ and $2$
with equal probability $0.5$ and so $E[Y] = 1\times 0.5 + 2\times 0.5 = 1.5$
as indeed one expects from the law of iterated expectation
$$E\left[E[Y\mid X]\right] = E[Y].$$
If the joint pmf was intended
to illustrate the difference between conditional
expectation and expectation, then it was a spectacularly bad choice
because the random variable $E[Y\mid X]$ turns out to have the
same distribution as the random variable $Y$, and so the expected
values are necessarily the same. More generally, $E[Y\mid X]$ does
not have the same distribution as $Y$ but their expected values are
the same.
Consider, for example, the joint pmf
$$\begin{array}{c c c c}
& \quad &X=1\quad &X=-1\quad &X=2 \\
&Y=1\quad &0.2\quad &0.2\quad &0.1 \\
&Y=2\quad &0.2\quad &0.1\quad &0.2
\end{array}$$
for which the conditional pmfs of $Y$ are
$$X=1: \qquad p_{Y\mid X}(1\mid X = 1) = \frac{1}{2}, \quad p_{Y\mid X}(2\mid X = 1) = \frac{1}{2}\\
X=-1: \qquad p_{Y\mid X}(1\mid X = -1) = \frac{2}{3}, \quad p_{Y\mid X}(2\mid X = -1) = \frac{1}{3}\\
X=2: \qquad p_{Y\mid X}(1\mid X = 2) = \frac{1}{3}, \quad p_{Y\mid X}(2\mid X = 2) = \frac{2}{3}$$
the conditional means are
$$\begin{align}
E[Y\mid X = 1] &= 1\times \frac{1}{2} + 2 \times \frac{1}{2} = \frac{3}{2}\\
E[Y\mid X = -1] &= 1\times \frac{2}{3} + 2 \times \frac{1}{3} = \frac{4}{3}\\
E[Y\mid X = 2] &= 1\times \frac{1}{3} + 2 \times \frac{2}{3} = \frac{5}{3}
\end{align}$$
that is, $E[Y\mid X]$ is a random variable that takes on values
$\frac{3}{2}, \frac{4}{3}, \frac{5}{3}$ with probabiliities
$\frac{4}{10}, \frac{3}{10}, \frac{3}{10}$ respectively which is
not the same as the distribution of $Y$. Note also that $E[Y] = \frac{3}{2}$
happens to equal $E[Y\mid X=1]$ but not the
other two conditional expectations. While $E[Y\mid X]$ and $Y$ have
different distributions, their expected values are the same:
$$E\left[E[Y\mid X]\right] = \frac{3}{2}\times\frac{4}{10}
+\frac{4}{3}\times\frac{3}{10} + \frac{5}{3}\times \frac{3}{10}
= \frac{3}{2} = E[Y].$$
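The whole calculation is easy to reproduce in R from the second joint pmf:

p <- rbind(c(0.2, 0.2, 0.1),   # rows: Y = 1, 2; columns: X = 1, -1, 2
           c(0.2, 0.1, 0.2))
yvals <- c(1, 2)
px  <- colSums(p)               # marginal pmf of X
EyX <- colSums(yvals * p) / px  # E[Y | X = x]: 3/2, 4/3, 5/3
sum(EyX * px)                   # E[E[Y | X]] = 3/2
sum(yvals * rowSums(p))         # E[Y] = 3/2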
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/68810",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Why is it that natural log changes are percentage changes? What is about logs that makes this so? Can somebody explain how the properties of logs make it so you can do log linear regressions where the coefficients are interpreted as percentage changes?
| These posts all focus on the difference between two values as a proportion of the first: $\frac{y-x}{x}$ or $\frac{y}{x} - 1$. They explain why
$$\frac{y}{x} - 1 \approx \ln(\frac{y}{x}) = \ln(y) - \ln(x).$$
You might be interested in the difference as a proportion of the average rather than as a proportion of the first value, $\frac{y-x}{\frac{y+x}{2}}$ rather than $\frac{y-x}{x}$. The difference as a proportion of the average is relevant when comparing methods of measurement, e.g. when doing a Bland-Altman plot with differences proportional to the mean. In this case, approximating the proportionate difference with the difference in the logarithms is even better:
$$\frac{y-x}{\frac{y+x}{2}} \approx \ln(\frac{y}{x}) .$$
Here is why:
Let $z = \frac{y}{x}$.
$$\frac{y-x}{\frac{y+x}{2}} = \frac{2(z-1)}{z+1}$$
Compare the Taylor series about $z =1$ for $\frac{2(z-1)}{z+1}$ and $\ln(z)$.
$$\frac{2(z-1)}{z+1} = (z-1) - \frac{1}{2}(z-1)^2 + \frac{1}{4}(z-1)^3 + ... + (-1)^{k+1}\frac{1}{2^{k-1}}(z-1)^k + ....$$
$$\ln(z) = (z-1) - \frac{1}{2}(z-1)^2 + \frac{1}{3}(z-1)^3 + ... + (-1)^{k+1}\frac{1}{k}(z-1)^k + ....$$
The series are the same out to the $(z-1)^2$ term. The approximation works quite well from $z=0.5$ to $z=2$ , i.e., when one of the values is up to twice as large as the other, as shown by this figure.
A value of $z=0.5$ or $z=2$ corresponds to a difference that is 2/3 of the average.
$$\frac{y-x}{\frac{y+x}{2}} = \frac{2x-x}{\frac{2x+x}{2}} = \frac{x}{\frac{3x}{2}} = \frac{2}{3}$$
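A quick visual check in R of how close the two expressions are on $[0.5, 2]$:

curve(2*(x - 1)/(x + 1), from = 0.5, to = 2, ylab = "value", lwd = 2)
curve(log(x), add = TRUE, lty = 2, col = "red")   # nearly indistinguishable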
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/244199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "78",
"answer_count": 6,
"answer_id": 4
} |
Let $X_1, X_2,...,X_n \sim \textrm{Expo}(1)$ distribution with $n \geq 3$. How to compute $\mathrm{P}(X_1 + X_2 \leq rX_3)$? So let $X_1, X_2,...,X_n$, where $n \geq 3$ be a random sample from the Expo(1) distribution.
How do I set up the computation for
$\mathrm{P}(X_1 + X_2 \leq rX_3 | \sum_{i=1}^n X_i = t)$ where $r, t > 0$?
Edit: I do not know how to use the notion that $X_1 + X_2$ is independent from $\frac{X_1}{X_1 + X_2}$ to proceed with the problem.
| So I think here is a way of solving the question analytically:
$\mathrm{P}(X_1 + X_2 \leq rX_3 | \sum_{i=1}^n X_i = t)\\
\rightarrow \mathrm{P}\left(X_1 + X_2 + X_3 \leq (1 + r)X_3 | \sum_{i=1}^n X_i = t\right)\\
\rightarrow \mathrm{P}\left(\frac{X_1 + X_2 + X_3}{X_3} \leq (1 + r) | \sum_{i=1}^n X_i = t\right)\\
\rightarrow \mathrm{P}\left(\frac{X_3}{X_1 + X_2 + X_3} \geq \frac{1}{1 + r} | \sum_{i=1}^n X_i = t\right)\\$
The sum of exponentially distributed r.v.'s follows a Gamma distribution. Hence, we have that
$X_1 + X_2 \sim \textrm{Gamma}(2,1)\\
X_3 \sim \textrm{Gamma}(1,1)$
And so,
$\mathrm{P}\left(\frac{X_3}{X_1 + X_2 + X_3} \geq \frac{1}{1 + r} | \sum_{i=1}^n X_i = t\right)\\
\rightarrow \mathrm{P}\left(\frac{\textrm{Gamma}(1,1)}{\textrm{Gamma}(2,1) + \textrm{Gamma}(1,1)} \geq \frac{1}{1 + r} | \sum_{i=1}^n X_i = t\right)\\
\rightarrow \mathrm{P}\left(\textrm{Beta}(1,2) \geq \frac{1}{1 + r} | \sum_{i=1}^n X_i = t\right)$
We can drop the conditional by noting that
$\frac{X_3}{X_1 + X_2 + X_3} \perp \sum_{i=1}^n X_i$.
$U + V \perp \frac{U}{U+V}$ if $U$ and $V$ are Gamma distributed, and we let $U = X_1 + X_2$ and $V = X_3$.
To compute
$\mathrm{P}\left(\textrm{Beta}(1,2) \geq \frac{1}{1 + r}\right)$
we let a random variable $Z \sim \textrm{Beta}(1,2)$
$\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \int_{\frac{1}{1+r}}^1 \frac{1}{\textrm{Beta}(1,2)}z^{1-1}(1-z)^{2-1}\textrm{d}z\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \int_{\frac{1}{1+r}}^1 \frac{\Gamma(3)}{\Gamma(1)\Gamma(2)}z^{1-1}(1-z)^{2-1}\textrm{d}z\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \int_{\frac{1}{1+r}}^1 \frac{2!}{0!1!}z^{1-1}(1-z)^{2-1}\textrm{d}z\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \int_{\frac{1}{1+r}}^1 2z^{0}(1-z)^{1}\textrm{d}z\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \int_{\frac{1}{1+r}}^1 2-2z\textrm{d}z\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = (2z - z^2)|_{\frac{1}{1+r}}^1\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = 2 - 1 - \frac{2}{1+r} + \frac{1}{(1+r)^2}\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \frac{-1+r}{1+r} + \frac{1}{(1+r)^2}\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \frac{r^2 - 1}{(1+r)^2} + \frac{1}{(1+r)^2}\\
\rightarrow\mathrm{P}\left(Z \geq \frac{1}{1 + r}\right) = \frac{r^2}{(1+r)^2}$
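Since the probability does not depend on the conditioning event, a direct simulation in R (with an arbitrary $r$) confirms the answer:

set.seed(1)
r <- 1.5
x <- matrix(rexp(3e6), ncol = 3)            # columns are X1, X2, X3
mean(x[, 1] + x[, 2] <= r * x[, 3])         # simulated probability
r^2 / (1 + r)^2                             # theoretical value 0.36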
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/249309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Decompose Covariance by Observations Suppose I observe $n$ iid realizations of two random variables $X$ and $Y$, denoted respectively $x_i$ and $y_i$. Observations can be groupped into two subsamples, with $n_1$ and $n_2$ observations. I want to decompose the sample covariance $\widehat{\sigma_{XY}}$ by the contribution of each group of observations plus possibly a residual term. Here is what I have:
\begin{eqnarray}
\widehat{\sigma_{XY}}
& = & \frac{1}{n} \sum^n x_i y_i -n \overline{x} \overline{y} \\
& = & \frac{n_1}{n} \frac{1}{n_1} \sum^{n_1} x_i y_i + \frac{n_2}{n} \frac{1}{n_2} \sum^{n_2} x_i y_i \\
&&- n\left(\frac{n_1}{n} \frac{1}{n_1} \sum^{n_1} x_i + \frac{n_2}{n} \frac{1}{n_2} \sum^{n_2} x_i\right) \left(\frac{n_1}{n} \frac{1}{n_1} \sum^{n_1} y_i + \frac{n_2}{n} \frac{1}{n_2} \sum^{n_2} y_i\right) \\
& =& \frac{n_1}{n} \frac{1}{n_1} \left(\sum^{n_1} x_i y_i -n_1 \overline{x}_1 \overline{y}_1\right) + \frac{n_2}{n} \frac{1}{n_2} \left(\sum^{n_2} x_i y_i -n_2 \overline{x}_2 \overline{y}_2\right) - \frac{n_1n_2}{n}\left(\overline{x}_1\overline{y}_2 + \overline{x}_2 \overline{y}_1\right)\\
& = & \frac{n_1}{n} \widehat{\sigma_{XY, 1}} + \frac{n_2}{n} \widehat{\sigma_{XY, 2}} - \frac{n_1n_2}{n}\left(\overline{x}_1\overline{y}_2 + \overline{x}_2 \overline{y}_1\right)
\end{eqnarray}
That is, the overall sample covariance is a weighted average of the sample covariances within each group minus a residual term.
However, in a numerical example this decomposition does not hold as I do not obtain the overall sample covariance on the LHS. Especially the residual term seems too large.
I would appreciate if somebody can double-check whether there is a mistake in the above.
Many thanks
| The covariance of $X$ and $Y$ is $\mathrm{Cov}(X,Y) = \mathrm{E}(XY) - \mathrm{E}(X)\mathrm{E}(Y)$, so the estimate is (writing $\sigma$ for $\widehat{\sigma_{XY}}$)
$\sigma = \frac{1}{n} \left(\sum^n x_i y_i - n \overline{x} \overline{y}\right) = -\overline{x} \overline{y} + \frac{1}{n} \sum^n x_i y_i$
NB: factor $\frac{1}{n}$ does not multiply $\overline{x} \overline{y}$.
It follows that
\begin{eqnarray}
\sigma
& = & \frac{n_1}{n} \sigma_1 + \frac{n_2}{n} \sigma_2 - \frac{n_1 n_2}{n^2}\left(\overline{x}_1\overline{y}_2 + \overline{x}_2 \overline{y}_1 - \overline{x}_1 \overline{y}_1 - \overline{x}_2 \overline{y}_2
\right)
\end{eqnarray}
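A quick numerical check of this decomposition in R (simulated data with $n_1 = 40$, $n_2 = 60$):

set.seed(1)
n1 <- 40; n2 <- 60; n <- n1 + n2
x <- rnorm(n); y <- 0.5 * x + rnorm(n)
g <- rep(1:2, c(n1, n2))
covn <- function(x, y) mean(x * y) - mean(x) * mean(y)   # 1/n-denominator covariance
lhs <- covn(x, y)
rhs <- n1/n * covn(x[g == 1], y[g == 1]) + n2/n * covn(x[g == 2], y[g == 2]) -
  n1*n2/n^2 * (mean(x[g == 1])*mean(y[g == 2]) + mean(x[g == 2])*mean(y[g == 1]) -
               mean(x[g == 1])*mean(y[g == 1]) - mean(x[g == 2])*mean(y[g == 2]))
c(lhs, rhs)   # the two values agree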
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/299205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Conditional Probability Distribution of Multivariate Gaussian I've been working on the following question:
Let the vector $(X,Y,Z)$ be a multivariate Gaussian random variables with mean vector and covariance matrix:
$\mu = \begin{pmatrix} 0 \\ 3 \\ -1 \end{pmatrix}$ and $\Sigma = \begin{pmatrix} 16 & 8 & 0 \\ 8 & 9 & -2 \\ 0 & -2 & 1 \end{pmatrix}$
Derive the distribution of $(X,Z)$ given $Y=0$
I know (from my lecture notes) that the conditional distribution of $X_1$ given $X_2=x_2$ is Gaussian with mean: $\mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(x_2-\mu_2)$
and variance-covariance matrix: $\Sigma_{11}-\Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$
So I rewrote $\Sigma$ as $\Sigma = \begin{pmatrix} 16 & 0 & 8 \\ 0 & 1 & -2 \\ 8 & -2 & 9 \end{pmatrix}$
And then I let $X_1=\begin{pmatrix} X \\ Z \end{pmatrix}$ and $X_2=(Y)$. So $E(X_1)=\begin{pmatrix} 0 \\-1 \end{pmatrix}$ and $E(X_2)=(3)$
And $Cov(\begin{pmatrix} Y_1 \\ Y_2 \end{pmatrix}) = \begin{pmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{pmatrix}$ where $\Sigma_{11}=\begin{pmatrix} 16 & 0 \\ 0 & 1 \end{pmatrix}, \Sigma_{12}=\begin{pmatrix} 8 \\ -2 \end{pmatrix} = \Sigma_{21}^T, \Sigma_{22}=(9)$
Meaning that $E(Y_1 \mid Y_2 =0) = \begin{pmatrix} 0 \\ -1 \end{pmatrix} + \begin{pmatrix} 8 \\ -2 \end{pmatrix}(\frac{1}{9})(0-3) = \begin{pmatrix} \frac{-8}{3} \\ \frac{1}{3}\end{pmatrix}$
and $Cov(Y_1 \mid Y_2=0) = \begin{pmatrix} 16 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} 8 \\ -2 \end{pmatrix}(\frac{1}{9})\begin{pmatrix} 8 & -2 \end{pmatrix} = \begin{pmatrix} 16 & 0 \\ 0 & 1 \end{pmatrix} - \begin{pmatrix} \frac{64}{9} & -\frac{16}{9} \\ -\frac{16}{9} & \frac{4}{9} \end{pmatrix} = \frac{1}{9}\begin{pmatrix} 80 & 16 \\ 16 & 5 \end{pmatrix}$
Am I correct in this approach? Is there something I'm fundamentally missing here? The subsequent question is "What is the distribution of $(2X-Z,3Y+Z)$?" How might I approach this one?
Thanks for any help/suggestions/feedback.
| You have the correct formulas, but I leave it to you to check whether you've applied them correctly.
As for the distribution of $(2X−Z,3Y+Z)$, viewed as a 2 element column vector.
Consider $(X,Y,Z)$ as a 3 element column vector. You need to determine the matrix $A$ such that $A*(X,Y,Z) = (2X−Z,3Y+Z)$. Hint: what dimensions must $A$ have to transform a 3 by 1 vector into a 2 by 1 vector? Then use the result $\text{Cov} (A*(X,Y,Z)) = A* \text{Cov}(X,Y,Z)*A^T$ combined with the trivial calculation of the mean, and your knowledge of the type of distribution which a linear transformation of a Multivariate Gaussian has.
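For the mechanics, a short R sketch of the hint (the conclusion about the distribution type is left to you):

mu    <- c(0, 3, -1)
Sigma <- matrix(c(16, 8, 0,
                   8, 9, -2,
                   0, -2, 1), nrow = 3, byrow = TRUE)
A <- rbind(c(2, 0, -1),     # 2X - Z
           c(0, 3,  1))     # 3Y + Z
A %*% mu                    # mean vector of (2X - Z, 3Y + Z)
A %*% Sigma %*% t(A)        # covariance matrix of (2X - Z, 3Y + Z)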
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/345784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Variance of an Unbiased Estimator for $\sigma^2$ Let $X_1, X_2,...X_n\sim N(0,\sigma^2)$ independently. Define $$Q=\frac{1}{2(n-1)}\sum_{i=1}^{n-1}(X_{i+1}-X_i)^2$$
I already proved that this Q is an unbiased estimator of $\sigma^2$. Now I'm stuck with calculating its variance, I've tried using Chi-square but then I realized these $(X_{i+1}-X_i)$ are not independent. Can you guys help me with this? Many thanks in advance.
| The easiest way to do this problem is by using vector algebra, re-expressing the estimator as a quadratic form in vector notation:
$$Q = \frac{1}{2(n-1)} \mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X}
\quad \quad \mathbf{\Delta} \equiv
\begin{bmatrix}
1 & -1 & 0 & 0 & \cdots & 0 & 0 & 0 & 0 \\
-1 & 2 & -1 & 0 & \cdots & 0 & 0 & 0 & 0 \\
0 & -1 & 2 & -1 & \cdots & 0 & 0 & 0 & 0 \\
0 & 0 & -1 & 2 & \cdots & 0 & 0 & 0 & 0 \\
\vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & 0 & \cdots & 2 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & \cdots & -1 & 2 & -1 & 0 \\
0 & 0 & 0 & 0 & \cdots & 0 & -1 & 2 & -1 \\
0 & 0 & 0 & 0 & \cdots & 0 & 0 & -1 & 1 \\
\end{bmatrix}.$$
Since $\mathbf{X} \sim \text{N}(\boldsymbol{0},\sigma^2 \boldsymbol{I})$ we can compute the expected value of the quadratic form to confirm that the estimator is unbiased:
$$\begin{equation} \begin{aligned}
\mathbb{E}(Q)
&= \frac{1}{2(n-1)} \cdot \mathbb{E}(\mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X}) \\[6pt]
&= \frac{\sigma^2}{2(n-1)} \cdot \text{tr}(\mathbf{\Delta} \boldsymbol{I}) \\[6pt]
&= \frac{\sigma^2}{2(n-1)} \cdot (1 + 2 + 2 + \cdots + 2 + 2 + 1) \\[6pt]
&= \frac{\sigma^2}{2(n-1)} \cdot (2n-2) \\[6pt]
&= \frac{2(n-1)}{2(n-1)} \cdot \sigma^2 = \sigma^2. \\[6pt]
\end{aligned} \end{equation}$$
This confirms that the estimator is unbiased. Now, to get the variance of the estimator we can compute the variance of a quadratic form for the case of a joint normal random vector, which gives:
$$\begin{equation} \begin{aligned}
\mathbb{V}(Q)
&= \frac{1}{4(n-1)^2} \cdot \mathbb{V}(\mathbf{X}^\text{T} \mathbf{\Delta} \mathbf{X}) \\[6pt]
&= \frac{\sigma^4}{2(n-1)^2} \cdot \text{tr}(\mathbf{\Delta} \boldsymbol{I} \mathbf{\Delta} \boldsymbol{I}) \\[6pt]
&= \frac{\sigma^4}{2(n-1)^2} \cdot \text{tr}(\mathbf{\Delta}^2) \\[6pt]
&= \frac{\sigma^4}{2(n-1)^2} \cdot (2 + 6 + 6 + \cdots + 6 + 6 + 2) \\[6pt]
&= \frac{\sigma^4}{2(n-1)^2} \cdot (6n - 8) \\[6pt]
&= \frac{3n-4}{(n-1)^2} \cdot \sigma^4 \\[6pt]
&= \frac{3n-4}{2n-2} \cdot \frac{2}{n-1} \cdot \sigma^4. \\[6pt]
\end{aligned} \end{equation}$$
This gives us an expression for the variance of the estimator. (I have framed it in this form to compare it to an alternative estimator below.) As $n \rightarrow \infty$ we have $\mathbb{V}(Q) \rightarrow 0$ so it is a consistent estimator. It is worth contrasting the variance of this estimator with the variance of the sample variance estimator (see e.g., O'Neill 2014, Result 3), which is:
$$\mathbb{V}(S^2) = \frac{2}{n-1} \cdot \sigma^4.$$
Comparing these results we see that the estimators have the same variance when $n=2$, and when $n>2$ the sample variance estimator has lower variance than the present estimator. In other words, the sample variance is a more efficient estimator than the present estimator.
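A simulation check in R (illustrative choices $\sigma = 1$, $n = 20$):

set.seed(1)
n <- 20
Q <- replicate(1e5, { x <- rnorm(n); sum(diff(x)^2) / (2 * (n - 1)) })
var(Q)                   # simulated variance of Q
(3*n - 4) / (n - 1)^2    # theoretical variance of Q derived above
2 / (n - 1)              # variance of the usual sample variance S^2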
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/390822",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Determine if the following Markov chain is positive recurrent, null recurrent or transient We consider the Markov chain with transition probabilities
$$
p(i,0)=\frac{1}{i^2 +2},\qquad p(i,i+1)= \frac{i^2 +1}{i^2 +2}.
$$
Determine if this Markov chain is positive recurrent, null recurrent or transient.
My attempt: Since all states communicate with $0$, it is sufficient to determine whether $0$ is a positive recurrent state.
Consider $T_{0}$ the hitting time, that is
$$T_{0}=\inf\left\{m\geq 1\: :\: X_{m}=0\right\}.$$
Note that
$$
\mathbb{P}(T_{0}=n|X_{0}=0)=\left(\frac{1}{2}\times\frac{2}{3}\times\cdots\times\frac{(n-2)^2+1}{(n-2)^{2}+2}\right)\left(\frac{1}{(n-1)^{2}+2}\right)
$$
Therefore, we have
$$
\mathbb{E}(T_{0}|X_{0}=0)=\sum_{n=1}^{\infty}n\times \left(\frac{1}{2}\times\frac{2}{3}\times\cdots\times\frac{(n-2)^2+1}{(n-2)^{2}+2}\right)\left(\frac{1}{(n-1)^{2}+2}\right).
$$
I need to determine if this series converges or diverges. I have tried to limit it superiorly and inferiorly but I have not found good bounds.
| Handle the product in the summation by taking logs:
$$\eqalign{
\log \prod_{i=0}^{n-2} \frac{i^2+1}{i^2+2} &= \sum_{i=0}^{n-2} \log\left(1 - \frac{1}{i^2+2}\right)\\
&\ge -2\sum_{i=0}^{n-2} \frac{1}{i^2+2} \qquad\text{(since } \log(1-u)\ge -2u \text{ for } 0\le u\le \tfrac{1}{2})\\
&\gt -2\sum_{i=0}^\infty \frac{1}{i^2+2} \\
&\gt -1 - 2\sum_{i=1}^\infty \frac{1}{i^2} \\
&= -\frac{3+\pi^2}{3}.
}$$
Consequently you can underestimate the sum as
$$\eqalign {
\sum_{n=1}^\infty n \prod_{i=0}^{n-2} \frac{i^2+1}{i^2+2} \frac{1}{(n-1)^2+2} & \gt \frac{1}{e^{(3+\pi^2)/3}} \sum_{n=1}^\infty \frac{n}{(n-1)^2+2}.
}$$
The right hand side diverges (compare it to $\int_1^\infty \frac{x}{x^2+2}\mathrm{d}x$).
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/439231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Prove variance of mean estimator of an AR(1) model I need to solve the proof as given in the screenshot below. I had tried to do something (under My solution) but I don't know how to proceed with the proof. I'm not sure whether I did something wrong or mistake. Please help me to continue with the proof. Any help will be greatly appreciated! Thanks!
| Compute directly (not the best homework question, if that's what this is):
\begin{align*}
Var(\bar{Y}) &= Var(\frac{1}{n} \sum_{t=1}^n Y_t) \\
&= \frac{1}{n^2} Var( \sum_{t=1}^n Y_t) \\
&= \frac{1}{n} \gamma(0) + 2 \frac{1}{n} \sum_{h = 1}^{n-1} \frac{n-h}{n} \gamma(h),
\end{align*}
where $\gamma(h) = \frac{\beta^h}{1-\beta^2}$ is the autocovariance function at lag $h$.
Substituting $\gamma(0) = \frac{1}{1-\beta^2}$ gives the first term in your sum
$$
\frac{1}{n} \gamma(0) = \frac{1}{n} \frac{1}{1-\beta^2}.
$$
Similarly,
\begin{align*}
2 \frac{1}{n} \sum_{h = 1}^{n-1} \frac{n-h}{n} \gamma(h) &= \frac{2}{n^2(1-\beta^2)} \sum_{h=1}^{n-1} (n-h) \beta^h \\
&= \frac{2}{n^2(1-\beta^2)} \sum_{m=1}^{n-1} \sum_{h=1}^{m} \beta^h \\
&= \frac{2}{n^2(1-\beta^2)} \sum_{m=1}^{n-1} \frac{\beta(1-\beta^{m})}{1-\beta} \\
&= \frac{2\beta}{n^2(1-\beta^2)(1-\beta)} \sum_{m=1}^{n-1} ( 1-\beta^{m} )\\
&= \cdots.
\end{align*}
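The pattern of the remaining algebra can be checked numerically in R (illustrative $\beta = 0.6$, $n = 50$):

beta <- 0.6; n <- 50
gamma_h <- function(h) beta^abs(h) / (1 - beta^2)     # AR(1) autocovariance
direct  <- sum(outer(1:n, 1:n, function(s, t) gamma_h(s - t))) / n^2
formula <- gamma_h(0)/n + (2/n) * sum((n - 1:(n - 1))/n * gamma_h(1:(n - 1)))
c(direct, formula)   # the two values agree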
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/453118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Largest singular values Given the positive semi-definite, symmetric matrix $A = bb^T + \sigma^2I$ where b is a column vector is it possible to find the singular values and singular vectors of the matrix analytically? I know that it has real eigenvalues since it's symmetric and positive semidefinite but not sure about solving directly for those values and their corresponding vectors.
| The singular values are the eigenvalues of $A.$ By definition, when there exists a nonzero vector $\mathbf x$ for which $A\mathbf{x}=\lambda \mathbf{x},$ $\lambda$ is an eigenvalue and $\mathbf{x}$ is a corresponding eigenvector.
Note, then, that
$$A\mathbf{b} = (\mathbf{b}\mathbf{b}^\prime + \sigma^2I)\mathbf{b} = \mathbf{b}(\mathbf{b}^\prime \mathbf{b}) + \sigma^2 \mathbf{b} = (|\mathbf{b}|^2+\sigma^2)\mathbf{b},$$
demonstrating that $\mathbf{b}$ is an eigenvector with eigenvalue $\lambda_1 = |\mathbf{b}|^2 + \sigma^2.$
Furthermore, whenever $\mathbf{x}$ is a vector orthogonal to $\mathbf{b}$ -- that is, when $\mathbf{b}^\prime \mathbf{x} = \pmatrix{0},$ we may similarly compute
$$A\mathbf{x} = (\mathbf{b}\mathbf{b}^\prime + \sigma^2I)\mathbf{x} = \mathbf{b}(\mathbf{b}^\prime \mathbf{x}) + \sigma^2 \mathbf{x} = (0+\sigma^2)\mathbf{x},$$
showing that all such vectors are eigenvectors with eigenvalue $\sigma^2.$
Provided these vectors are in a finite dimensional vector space of dimension $n$ (say), a straightforward induction establishes that the vectors $x$ for which $\mathbf{b}^\prime \mathbf{x}=0$ form a subspace $\mathbf{b}^\perp$ of dimension $n-1.$ Let $\mathbf{e}_2, \ldots, \mathbf{e}_n$ be an orthonormal basis for this subspace. It extends to an orthonormal basis $\mathscr{E} = (\mathbf{\hat b}, \mathbf{e}_2, \ldots, \mathbf{e}_n)$ of the whole space where $\mathbf{\hat b} = \mathbf{b}/|\mathbf{b}|$. In terms of this basis the matrix of $A$ therefore is
$$\operatorname{Mat}(A, \mathscr{E}, \mathscr{E}) = \pmatrix{|\mathbf{b}|^2+\sigma^2 & 0 & 0 & \cdots & 0 \\
0 & \sigma^2 & 0 & \cdots & 0 \\
0 & 0 & \ddots & \vdots & \vdots \\
\vdots & \vdots & \cdots & \ddots & 0 \\
0 & 0 & \cdots & 0 & \sigma^2
}$$
Whether or not every step of this derivation was clear, you can verify the result by setting
$$Q = \left(\mathbf{\hat b}; \mathbf{e}_2; \ldots; \mathbf{e}_n\right)$$
to be the matrix with the given columns and computing
$$Q\,\operatorname{Mat}(A, \mathscr{E}, \mathscr{E})\,Q^\prime = \mathbf{b}\mathbf{b}^\prime + \sigma^2I = A.$$
This is explicitly a singular value decomposition of the form $U\Sigma V^\prime$ where $U = V = Q$ and $\Sigma= \operatorname{Mat}(A, \mathscr{E}, \mathscr{E}).$
The Gram Schmidt process provides a general algorithm to find $\mathscr{E}$ (and therefore $Q$): its input is the series of vectors $\mathbf{\hat b}$, $(1,0,\ldots,0)^\prime,$ and so on through $(0,\ldots,0,1)^\prime.$ After $n-1$ steps it will produce an orthonormal basis including the starting vector $\mathbf b.$
As an example, let $\mathbf{b} = (3,4,0)^\prime.$ With $\sigma^2 = 1,$ compute
$$\mathbf{b}\mathbf{b}^\prime + \sigma^2 I = \pmatrix{10&12&0\\12&17&0\\0&0&1}$$
Here, $|\mathbf{b}|^2 = 3^2+4^2+0^2=5^2,$ so that $\mathbf{\hat b} = \mathbf{b}/5 = (3/5,4/5,0)^\prime.$ One way to extend this to an orthonormal basis is to pick $\mathbf{e}_2 = (-4/5,3/5,0)^\prime$ and $\mathbf{e}_3 = (0,0,1)^\prime.$ Thus
$$Q = \pmatrix{3/5&-4/5&0\\4/5&3/5&0\\0&0&1}$$
and we may confirm that
$$\begin{align}
Q\,\operatorname{Mat}(A, \mathscr{E}, \mathscr{E})\,Q^\prime &= \pmatrix{3/5&-4/5&0\\4/5&3/5&0\\0&0&1}\pmatrix{5^2+1^2&0&0\\0&1&0\\0&0&1}\pmatrix{3/5&4/5&0\\-4/5&3/5&0\\0&0&1}\\
&=\pmatrix{10&12&0\\12&17&0\\0&0&1}
\end{align}$$
as intended.
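A numerical confirmation in R using the example above:

b <- c(3, 4, 0)
A <- b %*% t(b) + diag(3)   # b b' + sigma^2 I with sigma^2 = 1
eigen(A)$values             # 26, 1, 1, i.e. |b|^2 + sigma^2 and sigma^2 (twice)
eigen(A)$vectors[, 1]       # proportional to b, up to sign and normalisation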
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/494628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability distribution of the distance of a point in a square to a fixed point The question is: given a fixed point with coordinates $(X,Y)$ in a square of side $N$, what is the probability distribution of the distance to a random point? More specifically, what is the probability distribution of the square root of the sum of squares of two independent uniform$(0,N)$ variables?
| To find the distribution of the distance between the origin, $(0,0)$ and a random point on the unit square $(0,1)^2$ we can integrate over the area that is within $d$ of the origin. That is, find the cdf $P(\sqrt{X^2 + Y^2} \leq d)$, then take the derivative to find the pdf. Extensions to the square $(0,N)$ are immediate.
Case 1: $ 0 \leq d \leq 1$
$$
F_{D}(d) = P(\sqrt{X^2 + Y^2} \leq d) = P(X^2 + Y^2 \leq d^2) \\
= \int_{0}^{d} \int_{0}^{\sqrt{d^2-x^2}}1dydx \\
= \int_{0}^{d} \sqrt{d^2 - x^2}dx\\
= \frac{d^2\pi}{4}
$$
Case 2: $1 \leq d \leq \sqrt{2}$
$$
F_{D}(d) = \int_{0}^{\sqrt{d^2 - 1}} 1 dx + \int_{\sqrt{d^2 - 1}}^{1}\int_{0}^{\sqrt{d^2-x^2}}1dydx \\
=\sqrt{d^2-1} + \int_{\sqrt{d^2-1}}^{1}\sqrt{d^2-x^2}dx\\
= \sqrt{d^2 -1} + \frac{1}{2}\left\{t\sqrt{d^2-t^2}+d^2\tan^{-1}\left(\frac{t}{\sqrt{d^2-t^2}}\right) \right\}|_{\sqrt{d^2-1}}^{1} \\
= \sqrt{d^2-1} + \frac{1}{2}\left\{\sqrt{d^2-1}+d^2\tan^{-1}\left(\frac{1}{\sqrt{d^2-1}}\right) - \sqrt{d^2-1}\sqrt{1}-d^2\tan^{-1}\left(\frac{\sqrt{d^2-1}}{1}\right)\right\} \\
=\sqrt{d^2-1} + \frac{d^2}{2}\left\{ \tan^{-1}\left(\frac{1}{\sqrt{d^2-1}}\right)-\tan^{-1}\left(\sqrt{d^2-1}\right)\right\}
$$
Taking the derivative gives the density
$$
f_{D}(d) = \frac{d\pi}{2}, 0 \leq d \leq 1\\
f_{D}(d) = d\left\{\tan^{-1}\left(\frac{1}{\sqrt{d^2-1}}\right)-\tan^{-1}(\sqrt{d^2-1})\right\}, 1 \leq d \leq \sqrt{2} \\
$$
Comparing the result with @BruceET's simulation answer on Expected average distance from a random point in a square to a corner, we find it matches exactly.
den <- function(d) {
if(d < 0) {
return(0)
}
if(d < 1) {
return(d*pi/2)
}
if(d < sqrt(2)) {
return(d*(atan(1/sqrt(d^2-1)) - atan(sqrt(d^2-1))))
}
if(d > sqrt(2)) {
return(0)
}
stop()
}
ys <- xs <- seq(from = 0, to = 1.42, by = 0.01)   # grid on which to evaluate the density
for(i in seq_along(xs)){
ys[i] <- den(xs[i])
}
set.seed(2021)
x = runif(10^6); y = runif(10^6)
d = sqrt(x^2 + y^2)
hist(d, prob=T, br=seq(0,1.42,0.02),xlim=c(0,1.5),ylim=c(0,1.5))
lines(xs,ys,col="red",lwd=2)
Created on 2021-06-15 by the reprex package (v2.0.0)
By symmetry this is equal to the distance between random points and $(0,1)$, $(1,0)$, and $(1,1)$. Finding the distance to a point on the boundary or the interior of the square will require considering more cases in the cumulative probability function.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/530697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Compute Exact Null Distribution for Friedman Statistic Problem Statement: If there are no ties and $b=2,\;k=3,$ derive the exact null distribution of $F_r,$ the Friedman statistic.
Note: This is Exercise 15.35 in Mathematical Statistics with Applications, 5th Ed., by Wackerly, Mendenhall, and Scheaffer.
My Work So Far: There are two equivalent formulae for the Friedman statistic, and I will use this one:
$$F_r=\frac{12}{bk(k+1)}\,\sum_{i=1}^kR_i^2-3b(k+1).$$
For our situation, this simplifies down to
$$F_r=\frac{1}{2}\sum_{i=1}^3R_i^2-24.$$
I wrote the following (quick and dirty) function in R to compute this statistic:
compute_Friedman_Fr = function(y)
{
  # y[1:2], y[3:4], y[5:6] are the two ranks assigned to treatments 1, 2, 3
  0.5*(sum(y[1:2])^2 + sum(y[3:4])^2 + sum(y[5:6])^2) - 24
}
where I am considering the first two elements of the list $y$ as the first treatment, the second two elements as the second treatment, and so on. The Friedman statistic is invariant to two kinds of permutations: permuting the treatments, and permuting the ranks within treatments. An example of a function call would be:
> compute_Friedman_Fr(c(1,2,3,4,5,6))
[1] 65.5
Hence, we can construct the following table:
$$
\begin{array}{cc}
\textbf{Rank Array} &F_r \\ \hline
(1,2,3,4,5,6) &65.5\\
(1,2,3,5,4,6) &62.5\\
(1,2,3,6,4,5) &61.5\\
(1,3,2,4,5,6) &62.5\\
(1,3,2,5,4,6) &58.5\\
(1,3,2,6,4,5) &56.5\\
(1,4,2,3,5,6) &61.5\\
(1,4,2,5,3,6) &53.5\\
(1,4,2,6,3,5) &52.5\\
(1,5,2,3,4,6) &56.5\\
(1,5,2,4,3,6) &52.5\\
(1,5,2,6,3,4) &50.5\\
(1,6,2,3,4,5) &53.5\\
(1,6,2,4,3,5) &50.5\\
(1,6,2,5,3,4) &49.5
\end{array}
$$
My Question: This bears absolutely no resemblance to the book's answer of
\begin{align*}
P(F_r=4)&=P(F_r=0)=1/6\\
P(F_r=3)&=P(F_r=1)=1/3
\end{align*}
I feel like there is a category error somewhere. How could $F_r,$ with the formula above, possibly ever equal $0,1,3,$ or $4?$ Am I wrong? If so, why? Is the book wrong? If so, why?
| So, if I understand @whuber's comment correctly, I am computing the statistic incorrectly. I should be doing this, instead (as one example):
$$
\begin{array}{c|ccc}
B/T &1 &2 &3 \\ \hline
A &1 &2 &3 \\
B &1 &2 &3 \\ \hline
\text{Total} \;(R_i) &2 &4 &6
\end{array},
$$
so that
$$F_r=\frac12(4+16+36)-24=4.$$
Another example:
$$
\begin{array}{c|ccc}
B/T &1 &2 &3 \\ \hline
A &1 &3 &2 \\
B &1 &2 &3 \\ \hline
\text{Total} \;(R_i) &2 &5 &5
\end{array},
$$
so that
$$F_r=\frac12(4+25+25)-24=3.$$
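To obtain the book's full null distribution, one can enumerate all rank assignments directly. Here is a minimal R sketch (the permutation matrix and loop are mine, not from the book); under $H_0$ each block's ranks are an independent, uniformly random permutation of $1,2,3$:
perms <- matrix(c(1,2,3, 1,3,2, 2,1,3, 2,3,1, 3,1,2, 3,2,1),
                ncol = 3, byrow = TRUE)   # all 6 rankings of k = 3 treatments
Fr <- numeric(0)
for (i in 1:6) {
  for (j in 1:6) {
    R  <- perms[i, ] + perms[j, ]         # treatment rank sums over b = 2 blocks
    Fr <- c(Fr, 0.5 * sum(R^2) - 24)      # F_r = (1/2) * sum(R_i^2) - 24
  }
}
table(Fr) / 36
# Fr = 0: 6/36 = 1/6, Fr = 1: 12/36 = 1/3, Fr = 3: 12/36 = 1/3, Fr = 4: 6/36 = 1/6
This reproduces $P(F_r=0)=P(F_r=4)=1/6$ and $P(F_r=1)=P(F_r=3)=1/3$.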
Many thanks, whuber, as always!
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/559215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find probability from uniform distribution Let $X$, $Y$ be two independent random variables from $U(0,1)$. Then find $P[Y>(X-1/2)^2]$.
I initially tried drawing the figure but that seemed complicated. I then took expectation on both sides and got $P[E(Y)>V(X)]$. Am I right?
| Since $(X,Y)$ is uniform on the unit square, this probability is the area of the region above the curve
\begin{equation}
y=\left(x-\tfrac{1}{2}\right)^2.
\end{equation}
It can be found by integration:
\begin{align}
P\!\left[Y>\left(X-\tfrac{1}{2}\right)^2\right] &= \int_{0}^{1}\int_{(x-\frac{1}{2})^2}^{1} 1\times 1 \,dy\,dx\\
&= \int_{0}^{1}\left[1-\left(x-\tfrac{1}{2}\right)^2\right] dx\\
&= \Big[x-\tfrac{1}{3}\left(x-\tfrac{1}{2}\right)^3\Big]_0^1\\
&= \Big(1-\tfrac{1}{3}\left(1-\tfrac{1}{2}\right)^3\Big) - \Big(0-\tfrac{1}{3}\left(0-\tfrac{1}{2}\right)^3\Big)\\
&= \tfrac{23}{24}-\tfrac{1}{24}\\
&= \tfrac{22}{24}=\tfrac{11}{12}.
\end{align}
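A quick Monte Carlo check in R (a minimal sketch, not part of the derivation) agrees with $\tfrac{11}{12}\approx 0.9167$:
set.seed(1)
x <- runif(1e6); y <- runif(1e6)
mean(y > (x - 0.5)^2)   # approximately 0.9167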
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/275108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding vectors that direct the discriminatory factorial axis
Let $E$ be a set of $100$ individuals for whom the quantitative variables $x_1$ and $x_2$ have been observed. This set is partitioned into $C_1$ and $C_2$, which contain $40$ and $60$ individuals respectively. Each individual has a weight of $1/100$. We write $g_1$ and $g_2$ for the centers of gravity of $C_1$ and $C_2$, and $V$ for the total variance matrix, that is, the covariance matrix of the variables $x_1$ and $x_2$ computed over all $100$ individuals. We assume that:
$$g_1=\begin{pmatrix}6\\1
\end{pmatrix}, g_2=\begin{pmatrix}1\\-4
\end{pmatrix}$$
I'm looking for the vector that directs the unique discriminant factorial axis, given the total variance matrix:
$$V=
\begin{pmatrix}
5 & 0\\
0 & 2
\end{pmatrix}$$
I have the following eigenvectors and eigenvalues:
*$\begin{pmatrix}1\\0\end{pmatrix}$ for the eigenvalue $5$
*$\begin{pmatrix}0\\1\end{pmatrix}$ for the eigenvalue $2$
Normalizing the vectors, I get:
*$v_1=\frac{1}{\sqrt{5}}\begin{pmatrix}1\\0\end{pmatrix}$
*$v_2=\frac{1}{\sqrt{2}}\begin{pmatrix}0\\1\end{pmatrix}$
| Here is the formula to apply:
\begin{align}
\frac{g_2-g_1}{||g_2-g_1||}&=
\frac{\begin{pmatrix}1\\-4\end{pmatrix}-\begin{pmatrix}6\\1\end{pmatrix}}{||\begin{pmatrix}1\\-4\end{pmatrix}-\begin{pmatrix}6\\1\end{pmatrix}||}\\
&=\frac{-5\begin{pmatrix}1\\1\end{pmatrix}}{||-5\begin{pmatrix}1\\1\end{pmatrix}||}\\
&=\frac{-5\begin{pmatrix}1\\1\end{pmatrix}}{5\sqrt{2}}\\
&=-\frac{1}{\sqrt{2}}\begin{pmatrix}1\\1\end{pmatrix}
\end{align}
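A quick numerical check in R (just redoing the arithmetic above, nothing beyond the formula):
g1 <- c(6, 1); g2 <- c(1, -4)
u  <- g2 - g1                 # (-5, -5)
u / sqrt(sum(u^2))            # -0.7071068 -0.7071068, i.e. -(1/sqrt(2)) * c(1, 1)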
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/277328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let A and B be independent. Show that $A^c$ and $B^c$ are independent - better solution? My solution seems too long-winded. This should take less writing, no? Here it is.
$B^c$ denotes the complement of $B$ (so $P(B^c) = 1 - P(B)$).
If $A$ and $B$ are independent events:
$$
P(A \cap B) = P(A)P(B)
$$
So I do:
$$
P(A^c \cap B^c) = P(A^c | B^c)P(B^c) = (1 - P(A|B^c))P(B^c) = (1 - \frac{P(A\cap B^c)}{P(B^c)})P(B^c) = (1 - \frac{P(B^c|A)P(A)}{P(B^c)})P(B^c)
$$
Now, I can use the result that since $A$ and $B$ are independent $P(B^c|A) = 1 - P(B|A) = 1 - P(B) = P(B^c)$. Then:
$$
(1 - \frac{P(B^c|A)P(A)}{P(B^c)})P(B^c) = (1 - \frac{P(B^c)P(A)}{P(B^c)})P(B^c) = (1 - P(A))P(B^c) = P(A^c)P(B^c)
$$
Therefore, since $P(A^c \cap B^c) = P(A^c)P(B^c)$, the events $A^c$ and $B^c$ are independent.
| Here is another way, using De Morgan's law $A^c\cap B^c=(A\cup B)^c$ together with inclusion-exclusion:
$$\begin{align} P(A^c\cap B^c) &=1-P(A\cup B)=1-P(A)-P(B)+P(A\cap B)\\ &= 1-P(A)-P(B)+P(A)P(B) \\ &=1-P(A)-P(B)(1-P(A))=(1-P(A))(1-P(B)) \\ &= P(A^c)P(B^c)\end{align}$$
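As a sanity check, here is a minimal R sketch with arbitrarily chosen probabilities (not part of the proof):
set.seed(1)
n <- 1e6
A <- runif(n) < 0.3           # independent events with P(A) = 0.3
B <- runif(n) < 0.5           # and P(B) = 0.5
mean(!A & !B)                 # approximately 0.35
(1 - 0.3) * (1 - 0.5)         # P(A^c) * P(B^c) = 0.35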
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/394726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$X_{1},X_{2},X_{3}\overset{i.i.d.}{\sim}N(0,1)$, find m.g.f. of $Y=X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3}$ I tried this
$X_{1}X_{2}+X_{1}X_{3}+X_{2}X_{3}=X_{1}(X_{2}+X_{3})+\frac{1}{4}(X_{2}+X_{3})^{2}-\frac{1}{4}(X_{2}-X_{3})^{2}$
$U=X_{2}+X_{3}\sim N(0,2)$
$\psi_{X_{1}(X_{2}+X_{3})}(t)=\psi_{X_{1}U}(t)=\frac{1}{\sqrt{2}\cdot 2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{x_{1}ut}e^{-\frac{1}{2}(x_{1}^{2}+\frac{u^{2}}{2})}\, dx_{1}\, du$
$=\frac{1}{\sqrt{2}\cdot 2\pi}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}e^{-\frac{1}{4}(2(x_{1}-ut)^{2}-2u^{2}t^{2}+u^{2})}\, dx_{1}\, du$
$V=(x_{1}-ut)\qquad dv=dx_{1}\\
=\frac{1}{\sqrt{2}\cdot 2\pi}\int_{-\infty}^{\infty}e^{-\frac{1}{4}(1-2t^{2})u^{2}}\int_{-\infty}^{\infty}e^{-\frac{1}{2}v^{2}}\, dv\, du\\$
$=\frac{1}{\sqrt{\pi}\cdot 2}\int_{-\infty}^{\infty}e^{-\frac{1}{4}(1-2t^{2})u^{2}}\, du$
$ w=\sqrt{\frac{1-2t^{2}}{2}}u,\qquad \sqrt{\frac{2}{1-2t^{2}}}\, dw=du\\
\psi_{X_{1}U}(t)=\frac{1}{\sqrt{1-2t^{2}}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}w^{2}}\, dw=\frac{1}{\sqrt{1-2t^{2}}}$
$Z=X_{2}-X_{3}\sim N(0,2)$
$\psi_{\frac{1}{4}(X_{2}-X_{3})^{2}}(t)=\frac{1}{\sqrt{4\pi}}\int_{-\infty}^{\infty}e^{\frac{1}{4}z^{2}t}e^{-\frac{1}{4}z^{2}}\, dz$
$=\frac{1}{\sqrt{4\pi}}\int_{-\infty}^{\infty}e^{-\frac{1}{4}(z^{2}-z^{2}t)}\, dz=\frac{1}{\sqrt{4\pi}}\int_{-\infty}^{\infty}e^{-\frac{1}{4}(1-t)z^{2}}\, dz\\$
$v=\sqrt{1-t}z,\quad \frac{1}{\sqrt{1-t}}\, dv=dz\\
\psi_{\frac{1}{4}(X_{2}-X_{3})^{2}}(t)=\frac{1}{\sqrt{1-t}}\int_{-\infty}^{\infty}\frac{1}{\sqrt{4\pi}}e^{-\frac{1}{4}v^{2}}\,dv=\frac{1}{\sqrt{1-t}}$
$\psi_{\frac{1}{4}(X_{2}+X_{3})^{2}-\frac{1}{4}(X_{2}-X_{3})^{2}}(t)=\frac{1}{\sqrt{1-t}}\cdot \frac{1}{\sqrt{1-(-t)}}=\frac{1}{\sqrt{(1-t)(1+t)}}=\frac{1}{\sqrt{1-t^{2}}}$
(Here $\frac{1}{4}(X_{2}+X_{3})^{2}$ has the same MGF as $\frac{1}{4}(X_{2}-X_{3})^{2}$, since $X_{2}+X_{3}$ and $X_{2}-X_{3}$ are both $N(0,2)$ and are independent of each other.)
$\psi_{X_{1}(X_{2}+X_{3})+\frac{1}{4}(X_{2}+X_{3})^{2}-\frac{1}{4}(X_{2}-X_{3})^{2}}(t)=\frac{1}{\sqrt{1-2t^{2}}}\cdot \frac{1}{\sqrt{1-t^{2}}}=\frac{1}{\sqrt{1-2t^{2}}\sqrt{1-t^{2}}}$
But I have learned that this is incorrect. What mistakes did I make?
| In your first line you write the expression as the sum of three quadratic forms, and you then multiply the three MGFs. But this is only valid if the quadratic forms are independent. Quadratic forms in independent standard normal variables are independent if and only if the product of their matrices is the zero matrix (Craig's theorem). This is not true of the first and second quadratic forms. However, it is true of the first and third, and of the second and third. That is why multiplying those two pairs of MGFs gave correct results.
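To make this concrete, here is a minimal R sketch; the matrices below are my own write-out of the three quadratic forms as $x^\top A x$ with $x=(X_1,X_2,X_3)$, not part of the original post:
# Q1 = X1*(X2+X3), Q2 = (X2+X3)^2/4, Q3 = -(X2-X3)^2/4
A1 <- matrix(c(0, 1, 1,
               1, 0, 0,
               1, 0, 0) / 2, nrow = 3, byrow = TRUE)
A2 <- matrix(c(0, 0, 0,
               0, 1, 1,
               0, 1, 1) / 4, nrow = 3, byrow = TRUE)
A3 <- -matrix(c(0, 0,  0,
                0, 1, -1,
                0, -1, 1) / 4, nrow = 3, byrow = TRUE)
A1 %*% A2   # nonzero, so Q1 and Q2 are not independent
A1 %*% A3   # zero matrix, so Q1 and Q3 are independent
A2 %*% A3   # zero matrix, so Q2 and Q3 are independent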
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/478495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |