Conditional Multivariate Gaussian Identity I'm trying to verify the form of a multivariate Gaussian provided in a paper I'm reading. It should be pretty elementary.
Let $Y=X+\varepsilon$ where $X\sim N(0,C)$ and $\varepsilon\sim N(0,\sigma^2\mathbf{I})$. The authors then claim that
$$
X|Y,C,\sigma^2 \sim N(\mu,\Sigma),
$$
where
$$
\mu := C(C+\sigma^2\mathbf I)^{-1}Y\\
\Sigma:=\sigma^2C(C+\sigma^2\mathbf I)^{-1}.
$$
My first thought was to consider the joint distribution
$$
\begin{pmatrix}
X\\
Y
\end{pmatrix}\sim N\Big(\begin{pmatrix}
0\\
0
\end{pmatrix},\begin{pmatrix}
C & C\\
C^\top & \sigma^2\mathbf I+C
\end{pmatrix}\Big)
$$
and apply the conditional Gaussian identities. Unfortunately this approach gives me the right $\mu$, but I can't see how their form of $\Sigma$ comes about. Any thoughts?
| This is a correct representation of the conditional variance.
Since
$$\begin{pmatrix}
X\\
\epsilon
\end{pmatrix}\sim N\Big(\begin{pmatrix}
0\\
0
\end{pmatrix},\begin{pmatrix}
C & \mathbf O\\
\mathbf O & \sigma^2\mathbf I
\end{pmatrix}\Big)$$
and
$$\begin{pmatrix}
X\\
Y
\end{pmatrix} = \begin{pmatrix}
\mathbf I &\mathbf O \\
\mathbf I &\mathbf I
\end{pmatrix}\begin{pmatrix}
X\\
\epsilon
\end{pmatrix}$$
the distribution of $\begin{pmatrix}
X\\
Y
\end{pmatrix}$ is
$$\begin{pmatrix}
X\\
Y
\end{pmatrix}\sim N\Big(\begin{pmatrix}
0\\
0
\end{pmatrix},\underbrace{\begin{pmatrix}
\mathbf I &\mathbf O \\
\mathbf I &\mathbf I
\end{pmatrix}\begin{pmatrix}
C & \mathbf O\\
\mathbf O & \sigma^2\mathbf I
\end{pmatrix}\begin{pmatrix}
\mathbf I &\mathbf I \\
\mathbf O &\mathbf I
\end{pmatrix}}_{\begin{pmatrix}
C & C\\
C & C+\sigma^2\mathbf I
\end{pmatrix}
}\Big)$$
indeed. With
$$\mathbb E[X|Y] = 0 + C (C+\sigma^2\mathbf I)^{-1}
Y $$
and
$$\text{var}(X|Y) = C - C (C+\sigma^2I)^{-1} C
$$
Applying the Woodbury matrix inversion lemma
$$(A+B)^{-1}=A^{-1}-A^{-1}(B^{-1}+A^{-1})^{-1}A^{-1}$$
one gets that
\begin{align*} C - C (C+\sigma^2\mathbf I)^{-1} C &= C - C \left(C^{-1}-
C^{-1}(C^{-1}+\sigma^{-2}\mathbf I)^{-1}C^{-1}\right)C\\ &= C - C
+(C^{-1}+\sigma^{-2}\mathbf I)^{-1}\\ &=
\left(C^{-1}(\mathbf I+\sigma^{-2}C)\right)^{-1} = (\mathbf I+\sigma^{-2}C)^{-1}C\\ &= \sigma^2(\sigma^2\mathbf I+C)^{-1}C = \sigma^2 C (\sigma^2\mathbf I+C)^{-1}
\end{align*}
The apparent lack of symmetry in the final expression may look suspicious, but in fact $$C (\sigma^2\mathbf I+C)^{-1} = (\sigma^2\mathbf I+C)^{-1} C,$$ so $\Sigma$ is symmetric after all.
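A quick numerical sanity check of the two forms of the conditional variance, and of the commutation, is easy to run. A sketch in Python/NumPy, with an arbitrary positive-definite $C$ of my own choosing:
import numpy as np
rng = np.random.default_rng(0)
d, sigma2 = 4, 0.7
A = rng.normal(size=(d, d))
C = A @ A.T + d * np.eye(d)              # an arbitrary positive-definite covariance
M = np.linalg.inv(C + sigma2 * np.eye(d))
Sigma_schur = C - C @ M @ C              # conditional variance from the Gaussian identities
Sigma_paper = sigma2 * C @ M             # the form claimed in the paper
print(np.allclose(Sigma_schur, Sigma_paper))   # True
print(np.allclose(C @ M, M @ C))               # True: C and (C + sigma^2 I)^{-1} commute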
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/481518",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Discrete probability distribution involving curtailed Riemann zeta values $\renewcommand{\Re}{\operatorname{Re}}$ $\renewcommand{\Var}{\operatorname{Var}}$We define the discrete random variable $X$ as having the probability mass function $$f_{X}(k) = \Pr(X=k) = \zeta(k)-1, $$ for $k \geq 2 $.
Here, $\zeta(\cdot)$ is the Riemann zeta function, defined as $$\zeta(s) = \sum_{n=1}^{\infty} n^{-s} $$ for $\Re(s) >1 $.
Now, $X$ is indeed a discrete RV, as we have $$\sum_{k=2}^{\infty} p_k = \sum_{k=2}^{\infty} (\zeta(k)-1) = 1,$$ (which we can find, for example, here) and it is clear that for all $k$ it holds that $$0 \leq p_k \leq 1 .$$
Furthermore, we can find the first and second moments of $X$. The mean amounts to $$E[X] = \sum_{k=2}^{\infty} k \big(\zeta(k)-1\big) = 1+\frac{\pi^{2}}{6} .$$
Moreover, we have $$E[X^{2}] = \sum_{k=2}^{\infty} k^{2} \big( \zeta(k)-1 \big) = 1 + \frac{\pi^{2}}{2} + 2 \zeta(3), $$ so we obtain \begin{align*} \Var(X) &= E[X^2] - E[X]^{2} \\
&= \frac{\pi^2}{6} +2 \zeta(3) - \frac{\pi^4}{36} \\
&= \zeta(2) + 2 \zeta(3) - \frac{5}{2} \zeta(4). \end{align*}
Question: does this discrete random variable involving curtailed Riemann zeta values come up in the literature on probability theory and/or statistics? Does it have any applications?
Note: please note that this RV differs from the Zeta distribution.
| To illustrate Whuber's comment
$$\begin{array}{c|ccccc}
 & Y = 2 & Y = 3 & Y = 4 & Y = 5 & \cdots\\
\hline
X = 2 & \frac{1}{2^2} & \frac{1}{3^2} & \frac{1}{4^2} & \frac{1}{5^2} & \cdots\\
X = 3 & \frac{1}{2^3} & \frac{1}{3^3} & \frac{1}{4^3} & \frac{1}{5^3} & \cdots\\
X = 4 & \frac{1}{2^4} & \frac{1}{3^4} & \frac{1}{4^4} & \frac{1}{5^4} & \cdots\\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots
\end{array}$$
Each term $y^{-x}$ in this table can be seen as the product $P(X=x\mid Y=y)\,P(Y=y)$ of a shifted geometric distribution $$P(X=x\mid Y=y) =
\left(\frac{1}{y}\right)^{x} y(y-1), \qquad x = 2, 3, \ldots$$ and some sort of variant of a Zipf distribution $$P(Y=y) =
\frac{1}{y(y-1)}, \qquad y = 2, 3, \ldots$$
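A small numerical check of the marginal pmf and of this decomposition, truncating the infinite sums at arbitrary cutoffs; a sketch:
import numpy as np
from scipy.special import zeta
k = np.arange(2, 60)
p = zeta(k) - 1                                  # P(X = k) = zeta(k) - 1
print(p.sum())                                   # ~ 1
print((k * p).sum(), 1 + np.pi**2 / 6)           # mean ~ 1 + pi^2/6
# marginalizing the table: P(X = x) = sum_y P(X = x | Y = y) P(Y = y) = sum_y y^(-x)
y = np.arange(2, 5000)
for x in (2, 3, 4):
    print((y ** (-float(x))).sum(), zeta(x) - 1)   # agree up to the truncation error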
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/604631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Expected value of a random variable Random variable $X$ has the probability density function
\begin{equation*}
f\left( x\right) =\left\{
\begin{array}{ccc}
n\left( \frac{x}{\theta }\right) ^{n-1} & , & 0<x\leqslant \theta \\
n\left( \frac{1-x}{1-\theta }\right) ^{n-1} & , & \theta \leqslant x<1%
\end{array}%
\right.
\end{equation*}
show that if $k\in \mathbb{N}$
\begin{equation*}
\mathrm{E}\left( X^{k}\right) =\frac{n\theta ^{k+1}}{n+k}+\sum%
\limits_{i=0}^{k}\left( -1\right) ^{i}\binom{k}{k-i}\frac{n}{n+i}\left(
1-\theta \right) ^{i+1}
\end{equation*}
It is easy to find the first term of $\mathrm{E}\left( X^{k}\right)$, but I
couldn't find the second one. I think I have to use beta distribution
properties. I tried to transform the integral
\begin{equation*}
\int\nolimits_{\theta }^{1}nx^{k}\left( \frac{1-x}{1-\theta }\right) ^{n-1}%
\mathrm{d}x
\end{equation*}
to a beta pdf using the transformation $u=\frac{x-1}{\theta -1}$, but I couldn't
get a reasonable result. After this transformation is applied, I have to find
\begin{equation*}
\int\nolimits_{0}^{1}u^{n-1}\left[ 1+u\left( \theta -1\right) \right] ^{k}%
\mathrm{d}u
\end{equation*}
but I couldn't. I also tried to use the equality
\begin{equation*}
x^{k}=1+\left( x-1\right) \sum\limits_{n=0}^{k-1}x^{n}
\end{equation*}
but I couldn't get the result.
| $$\begin{align*} {\rm E}[X^k] &= \int_{x=0}^\theta x^k n \biggl(\frac{x}{\theta}\biggr)^{\!n-1} \, dx + \int_{x=\theta}^1 x^k n \biggl(\frac{1-x}{1-\theta}\biggr)^{\!n-1} \, dx \\ &= \frac{n}{\theta^{n-1}} \int_{x=0}^\theta x^{n+k-1} \, dx + \frac{n}{(1-\theta)^{n-1}} \int_{x=0}^{1-\theta} (1-x)^k x^{n-1} \, dx \\ &= \frac{n}{\theta^{n-1}} \cdot \frac{\theta^{n+k}}{n+k} + \frac{n}{(1-\theta)^{n-1}} \int_{x=0}^{1-\theta} \sum_{i=0}^{k} \binom{k}{i} (-x)^i x^{n-1} \, dx \\ &= \frac{n \theta^{k+1}}{n+k} + \frac{n}{(1-\theta)^{n-1}} \sum_{i=0}^{k} (-1)^i \binom{k}{k-i} \int_{x=0}^{1-\theta} x^{n+i-1} \, dx \\ &= \frac{n\theta^{k+1}}{n+k} + \frac{n}{(1-\theta)^{n-1}} \sum_{i=0}^k (-1)^i \binom{k}{k-i} \frac{(1-\theta)^{n+i}}{n+i} \\ &= \frac{n\theta^{k+1}}{n+k} + \sum_{i=0}^k (-1)^i \binom{k}{k-i} \frac{n}{n+i} (1-\theta)^{i+1}. \end{align*}$$
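A quick numerical cross-check of the closed form against direct integration of $x^k f(x)$; a sketch with arbitrarily chosen $n$, $\theta$, $k$:
import numpy as np
from scipy.integrate import quad
from scipy.special import comb
n, theta, k = 5, 0.3, 4
f = lambda x: n * (x / theta)**(n - 1) if x <= theta else n * ((1 - x) / (1 - theta))**(n - 1)
direct, _ = quad(lambda x: x**k * f(x), 0, 1, points=[theta])
closed = n * theta**(k + 1) / (n + k) + sum(
    (-1)**i * comb(k, k - i) * n / (n + i) * (1 - theta)**(i + 1) for i in range(k + 1))
print(direct, closed)      # the two agree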
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/93972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
K-means++ algorithm I try to implement k-means++, but I'm not sure how it works. I have the following dataset:
(7,1), (3,4), (1,5), (5,8), (1,3), (7,8), (8,2), (5,9), (8,0)
From the wikipedia:
*
*Step 1: Choose one center uniformly at random from among the data points.
let's say the first centroid is 8,0
*Step 2: For each data point x, compute D(x), the distance between x and the nearest center that has already been chosen.
I calculate each point's distance to that point, nr. 9 (8,0):
*
*1 (7,1) distance = (8-7)^2 + (0-1)^2 = (1)^2 + (-1)^2 = 1 + 1 = 2
*2 (3,4) distance = (8-3)^2 + (0-4)^2 = (5)^2 + (-4)^2 = 25 + 16 = 41
*3 (1,5) distance = (8-1)^2 + (0-5)^2 = (7)^2 + (-5)^2 = 49 + 25 = 74
*4 (5,8) distance = (8-5)^2 + (0-8)^2 = (3)^2 + (-8)^2 = 9 + 64 = 73
*5 (1,3) distance = (8-1)^2 + (0-3)^2 = (7)^2 + (-3)^2 = 49 + 9 = 58
*6 (7,8) distance = (8-7)^2 + (0-8)^2 = (1)^2 + (-8)^2 = 1 + 64 = 65
*7 (8,2) distance = (8-8)^2 + (0-2)^2 = (0)^2 + (-2)^2 = 0 + 4 = 4
*8 (5,9) distance = (8-5)^2 + (0-9)^2 = (3)^2 + (-9)^2 = 9 + 81 = 90
*Step 3: Choose one new data point at random as a new center, using a weighted probability distribution where a point x is chosen with probability proportional to $D(x)^2$.
*Step 4: Repeat Steps 2 and 3 until k centers have been chosen.
Could someone explain in detail how to calculate the 3rd step?
| For step 3,
Choose one new data point at random as a new center, using a weighted
probability distribution where a point x is chosen with probability
proportional to $D(x)^2$.
Compute all the $D(x)^2$ values and convert them to an array of cumulative sums. That way each item is represented by a range proportional to its value. Then pick a uniform random number in that range and see which item it corresponds to (using a binary search).
For instance, you have:
D(x)^2 = [2, 41, 74, 73, 58, 65, 4, 90]
cumulative D(x)^2 = [2, 43, 117, 190, 248, 313, 317, 407]
So pick a random number from [0, 407). Say you pick 123.45. It falls in the range [117, 190) which corresponds to the 4th item.
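In code, that selection step might look like the following sketch (NumPy; the array is just the $D(x)^2$ values of this example):
import numpy as np
d2 = np.array([2, 41, 74, 73, 58, 65, 4, 90], dtype=float)   # the D(x)^2 values above
cum = np.cumsum(d2)                                          # [2, 43, 117, 190, 248, 313, 317, 407]
r = np.random.default_rng().uniform(0, cum[-1])              # uniform draw on [0, 407)
idx = np.searchsorted(cum, r, side='right')                  # binary search for the interval
print(r, "-> new center is item", idx + 1)
# equivalently in one line: np.random.choice(len(d2), p=d2 / d2.sum())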
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/135403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Equation for the variance inflation factors Following a question asked earlier, the variance inflation factors (VIFs) can be expressed as
$$
\textrm{VIF}_j = \frac{\textrm{Var}(\hat{b}_j)}{\sigma^2} =
[\mathbf{w}_j^{\prime} \mathbf{w}_j - \mathbf{w}_j^{\prime}
\mathbf{W}_{-j} (\mathbf{W}_{-j}^{\prime} \mathbf{W}_{-j})^{-1}
\mathbf{W}_{-j}^{\prime} \mathbf{w}_j]^{-1}
$$
where $\mathbf{W}$ is the unit-length scaled version of $\mathbf{X}$.
Can anyone show me how to get from here to the equation
$$
\textrm{VIF}_j = \frac{1}{1-R_j^2}
$$
where $R_j^2$ is the coefficient of multiple determination obtained from regressing $x_j$ on the other regressor variables.
I'm having a lot of trouble getting these matrix operations right...
| Assume all $X$ variables are standardized by the correlation transformation, i.e. the unit-length scaled version of $\mathbf{X}$ that you mentioned. Standardizing does not change the correlations between the $X$ variables, so the $VIF$s can be calculated from the standardized version of the original linear model. Let's denote the design matrix after the standardized transformation as
\begin{align*}
\mathbf{X^*} = \begin{bmatrix} 1& X_{11}& \ldots &X_{1,p-1} \\ 1& X_{21}& \ldots &X_{2,p-1} \\ \vdots & \vdots & \vdots & \vdots \\ 1& X_{n1}& \ldots &X_{n,p-1} \\ \end{bmatrix}.
\end{align*}
Then
\begin{align*}
\mathbf{X^{*'}X^*} = \begin{bmatrix} n & \mathbf{0}' \\ \mathbf{0} & \mathbf{r}_{XX} \end{bmatrix},
\end{align*}
where $\mathbf{r}_{XX}$ is the correlation matrix of $X$ variables. We also know that
\begin{align*}
\sigma^2\{\hat{\beta}\}
& = \sigma^2 (\mathbf{X^{*'}X^*})^{-1}\\
& = \sigma^2 \begin{bmatrix} \frac{1}{n} & \mathbf{0}' \\ \mathbf{0} & \mathbf{r}^{-1}_{XX} \end{bmatrix}.\\
\end{align*}
$VIF_k$ for $k=1,2,\ldots,p-1$ is the $k$-th diagonal term of $\mathbf{r}^{-1}_{XX}$. We only need to prove this for $k = 1$ because you can permute the rows and columns of $r_{XX}$ to get the result for other $k$.
Let's define:
\begin{align*}
\mathbf{X}_{(-1)} = \begin{bmatrix} X_{12}&\ldots&X_{1,p-1} \\X_{22}&\ldots&X_{2,p-1}\\ \vdots & \vdots & \vdots \\ X_{n2}&\ldots&X_{n,p-1} \\ \end{bmatrix}, \mathbf{X}_1 = \begin{bmatrix} X_{11} \\ X_{21} \\ \vdots \\ X_{n1} \\ \end{bmatrix}.
\end{align*}
Note that both matrices are different from design matrices. Since we only care about the coefficients of $X$ variables, the $1$-vector of a design matrix can be ignored in our calculation. Hence, by using Schur's complement,
\begin{align*}
r^{-1}_{XX} (1,1)
& = (r_{11} - r_{1\mathbf{X}_{(-1)}} r^{-1}_{\mathbf{X}_{(-1)}\mathbf{X}_{(-1)}} r_{\mathbf{X}_{(-1)}1})^{-1} \\
& = (r_{11} - [r_{1\mathbf{X}_{(-1)}} r^{-1}_{\mathbf{X}_{(-1)}\mathbf{X}_{(-1)}}] r_{\mathbf{X}_{(-1)}\mathbf{X}_{(-1)}} [r^{-1}_{\mathbf{X}_{(-1)}\mathbf{X}_{(-1)}} r_{\mathbf{X}_{(-1)}1}])^{-1} \\
& = (1-\beta_{1\mathbf{X}_{(-1)}}' \mathbf{X}_{(-1)}' \mathbf{X}_{(-1)} \beta_{1\mathbf{X}_{(-1)}} )^{-1},
\end{align*}
where $\beta_{1\mathbf{X}_{(-1)}}$ is the vector of regression coefficients of $X_1$ on $X_2, \ldots, X_{p-1}$, excluding the intercept. In fact, the intercept is zero, since all $X$ variables are standardized to mean zero.
On the other hand, (it would be more straightforward if we can write everything in explicit matrix form)
\begin{align*}
R_1^2
& = \frac{SSR}{SSTO} = \frac{\beta_{1\mathbf{X}_{(-1)}}' \mathbf{X}_{(-1)}' \mathbf{X}_{(-1)} \beta_{1\mathbf{X}_{(-1)}}}{1} \\
& = \beta_{1\mathbf{X}_{(-1)}}' \mathbf{X}_{(-1)}' \mathbf{X}_{(-1)} \beta_{1\mathbf{X}_{(-1)}}.
\end{align*}
Therefore
\begin{align*}
VIF_1 = r^{-1}_{XX} (1,1) = \frac{1}{1-R_1^2}.
\end{align*}
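The equivalence is also easy to verify numerically. A sketch with a randomly generated design of my own (variable names are mine):
import numpy as np
rng = np.random.default_rng(1)
n, p = 200, 4
X = rng.normal(size=(n, p)) @ rng.normal(size=(p, p))   # correlated regressors
Xc = X - X.mean(axis=0)
W = Xc / np.sqrt((Xc**2).sum(axis=0))                    # unit-length (correlation) scaling
r_xx = W.T @ W                                           # correlation matrix of the X variables
for j in range(p):
    w_j = W[:, j]
    W_mj = np.delete(W, j, axis=1)
    fitted = W_mj @ np.linalg.solve(W_mj.T @ W_mj, W_mj.T @ w_j)
    vif_formula = 1.0 / (w_j @ w_j - w_j @ fitted)       # the bracketed expression, inverted
    R2_j = w_j @ fitted                                  # R^2 of x_j on the other regressors
    print(vif_formula, 1 / (1 - R2_j), np.linalg.inv(r_xx)[j, j])   # all three agree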
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/244468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Probability of a person correctly guessing at least one number out of the two number another person chooses
Person A randomly chooses a number from 1 to 5 (inclusive) twice, so A ends up with 2 numbers chosen (can be the same number). Person B also makes a random choice from that list (only 1 number). What's the probability that B's choice match at least one of A's choices?
My interpretation of the question is: what's the probability that a randomly chosen number (call it $c$) between 1 and 5 is in the random set $\{a, b\}$, where $a$ and $b$ are between 1 and 5?
My attempt of solving this question is the following:
There are 2 cases: 1. $a$ and $b$ are the same number; 2. $a$ and $b$ are different.
case 1:
if $a = b$, $\mathbb{P}[c \in \{a, b\}] = \mathbb{P}[c = a] = \frac{1}{5} $
case 2:
if $a \not= b$, $\mathbb{P}[c \in \{a, b\}] = \mathbb{P}[c = a] + \mathbb{P}[c = b]= \frac{2}{5} $
The probability of case 1 occurring is $\frac{1}{5}$. The calculation is similar to case 1. And the probability of case 2 occurring is $\frac{4}{5}$ since its the complement of case 1. Then, the answer is $$\mathbb{P}[\text{case 1}] \cdot\frac{1}{5} + \mathbb{P}[\text{case 2}]\cdot\frac{2}{5} = \frac{1}{5} \cdot \frac{1}{5} + \frac{4}{5} \cdot\frac{2}{5} = \frac{3}{5}$$
However, the correct answer is $\frac{9}{25}$. This is confirmed by a simulation I ran.
What did I do wrong?
| Your reasoning is correct up to the very last line:
$$\mathbb{P}[\text{case 1}] \cdot\frac{1}{5} + \mathbb{P}[\text{case 2}]\cdot\frac{2}{5} = \frac{1}{5} \cdot \frac{1}{5} + \frac{4}{5} \cdot\frac{2}{5}$$
But this is not equal to $\frac{3}{5}$.
Instead:
$$\frac{1}{5} \cdot \frac{1}{5} + \frac{4}{5} \cdot\frac{2}{5} = \frac{1 \cdot 1}{25} + \frac{4 \cdot 2}{25} = \frac{9}{25}$$
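A quick simulation confirming the $\frac{9}{25}$; a sketch:
import numpy as np
rng = np.random.default_rng(0)
N = 1_000_000
a, b, c = rng.integers(1, 6, size=(3, N))     # three independent uniform picks from {1,...,5}
print(((c == a) | (c == b)).mean(), 9 / 25)   # both ~ 0.36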
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/586515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to get the $INR(x_i)$ in PCA, the relative contribution of $x_i$ to the total inertia? Given the following data array :
$$\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
J/I&1 & 2 & 3 & 4 & 5 & 6\\
\hline
x & 1 & 0 & 0 & 2 & 1 & 2\\
y & 0 & 0 & 1 & 2 & 0 & 3\\
z & 0 & 1 & 2 & 1 & 0 & 2\\
\hline
\end{array}$$
I can get the following values for the centered data $Y$ along with the variance
$$\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
&1 & 2 & 3 & 4 & 5 & 6 & v_2 & v_3\\
\hline
\mbox{x }&1 & 0 & 0 & 2 & 1 & 2 & 1/\sqrt{6} & 1/\sqrt{2}\\
\mbox{y }&0 & 0 & 1 & 2 & 0 & 3 & 2/\sqrt{6} & 0\\
\mbox{z }&0 & 1 & 2 & 1 & 0 & 2 & 1/\sqrt{6} & -1/\sqrt{2}\\
\hline
\end{array}$$
From there I can get the principal components value :
\begin{align}
Vv_2&=
\begin{pmatrix}4 & 4 & 0\\
4 & 8 & 4\\
0 & 4 & 4\end{pmatrix}
\frac{1}{\sqrt 6}\begin{pmatrix}
1\\2\\1
\end{pmatrix}\\
&=\frac{2}{\sqrt 6}
\begin{pmatrix}
1\\2\\1
\end{pmatrix}
\end{align}
$PC_1$ value is therefore $2$.
\begin{align}
Vv_3&=
\begin{pmatrix}4 & 4 & 0\\
4 & 8 & 4\\
0 & 4 & 4\end{pmatrix}
\frac{1}{\sqrt 2}\begin{pmatrix}
1\\0\\-1
\end{pmatrix}\\
&=\frac{2}{3\sqrt 2}
\begin{pmatrix}
1\\0\\-1
\end{pmatrix}
\end{align}
$PC_2$'s value is therefore $\frac{2}{3}$.
$$\begin{array}{|c|c|c|c|c|c|c|c|c|}
\hline
&1 & 2 & 3 & 4 & 5 & 6 & v_2 & v_3\\
\hline
\mbox{value $PC_1$ }&&&&&&&2&\\
\mbox{Value $PC_2$ }&&&&&&&&2/3\\
\mbox{value coef INR }&&&&&&&&\\
\mbox{value coef CTR }&&&&&&&&\\
\mbox{value coef COR }&&&&&&&&\\
\hline
\end{array}$$
How do I get the INR (I think it's a French acronym), the relative contribution of an individual $x_i$ to the total inertia $I_T$:
$$INR(i)=\frac{p_id(0,y_i)^2}{I_T}$$
With $d$ being usually the euclidean distance. We can deduce from the definition that $\sum INR(i)=1$.
\begin{align}
INR(1) &= \frac{1}{6}\times\frac{(-1)^2+1^2}{2+\frac{2}{3}} = \frac{1}{4}\\
INR(2) &= \frac{1}{6}\times\frac{(-1)^2+(-1)^2}{2+\frac{2}{3}} = \frac{1}{4}\\
INR(3) &= \frac{1}{6}\times\frac{1^2+1^2}{2+\frac{2}{3}} = \frac{1}{4}\\
INR(4) &= \frac{1}{6}\times\frac{1^2+2^2+1^2}{2+\frac{2}{3}} = \frac{2}{3}???\\
\end{align}
It would be much more than $1$ now.
| If I'm reading your question correctly, you're looking for the inertia around an arbitrary point in your data cloud. That can be formulated as $I_g-\|\bar x-a\|^2$, where $I_g$ is the total inertia and $a$ is the particular point in question.
Here's a source on that, with the derivation included (pages 8-9):
https://cedric.cnam.fr/fichiers/art_1827.pdf
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/277183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the UMVUE of $\frac{\mu^2}{\sigma}$ where $X_i\sim\mathsf N(\mu,\sigma^2)$
Suppose $X_1, ..., X_4$ are i.i.d $\mathsf N(\mu, \sigma^2)$
random variables. Give the UMVUE of $\frac{\mu^2}{\sigma}$ expressed in terms of $\bar{X}$, $S$, integers, and $\pi$.
Here is a relevant question.
I first note that if $X_1,...,X_n$ are i.i.d $\mathsf N(\mu,\sigma^2)$ random variables having pdf
$$\begin{align*}
f(x\mid\mu,\sigma^2)
&=\frac{1}{\sqrt{2\pi\sigma^2}}\text{exp}\left(-\frac{(x-\mu)^2}{2\sigma^2}\right)\\\\
&=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{\mu^2}{2\sigma^2}}\text{exp}\left(-\frac{1}{2\sigma^2}x^2+\frac{\mu}{\sigma^2}x\right)
\end{align*}$$
where $\mu\in\mathbb{R}$ and $\sigma^2\gt0$, then
$$T(\vec{X})=\left(\sum_{i=1}^n X_i^2, \sum_{i=1}^n X_i\right)$$
are sufficient statistics and are also complete since $$\{\left(-\frac{1}{2\sigma^2},\frac{\mu}{\sigma^2}\right):\mu\in\mathbb{R}, \sigma^2\gt0\}=(-\infty,0)\times(-\infty,\infty)$$
contains an open set in $\mathbb{R}^2$
I also note that the sample mean and sample variance are stochastically independent and so letting
$$\overline{X^2}=\frac{1}{n}\sum_{i=1}^n X_i^2$$
$$\overline{X}=\frac{1}{n}\sum_{i=1}^n X_i$$
we have
$$\mathsf E\left(\frac{\overline{X^2}}{S}\right)=\mathsf E\left(\overline{X^2}\right)\cdot\mathsf E\left(\frac{1}{S}\right)=\overline{X^2}\cdot\mathsf E\left(\frac{1}{S}\right)$$
It remains only to find $\mathsf E\left(\frac{1}{S}\right)$
We know that $$(n-1)\frac{S^2}{\sigma^2}\sim\chi_{n-1}^2$$
Hence
$$\begin{align*}
\mathsf E\left(\frac{\sigma}{S\sqrt{3}}\right)
&=\int_0^{\infty} \frac{1}{\sqrt{x}} \cdot\frac{1}{\Gamma(1.5)2^{1.5}}\cdot\sqrt{x}\cdot e^{-x/2}dx\\\\
&=\frac{4}{\sqrt{\pi}\cdot2^{1.5}}
\end{align*}$$
So $$\mathsf E\left(\frac{1}{S}\right)=\frac{4\sqrt{3}}{\sqrt{\pi}\cdot 2^{1.5}\cdot \sigma}$$
But since $\mathsf E(S)\neq\sigma$ I don't think I can just plug in $S$ for $\sigma$ here.
I have that since $\mathsf E\left(\overline{X^2}\right)=\mathsf{Var}\left(\overline{X}\right)+\mathsf E\left(\bar{X}\right)^2=\frac{\sigma^2}{4}+\mu^2$
Hence
$$\sigma=\sqrt{4\left(E\left(\overline{X^2}\right)-E\left(\overline{X}\right)^2\right)}=\sqrt{4\left(\overline{X^2}-\overline{X}^2\right)}$$
Hence the UMVUE of $\frac{\mu^2}{\sigma}$ is
$$\frac{4\sqrt{3}\cdot\overline{X^2}}{\sqrt{\pi}\cdot 2^{1.5}\cdot \sqrt{4\left(\overline{X^2}-\overline{X}^2\right)}}=\frac{\sqrt{\frac{3}{2\pi}}\left(\frac{S^2}{4}+\bar{X}^2\right)}{\sqrt{\frac{S^2}{4}}}$$
Is this a valid solution?
| I have skipped some details in the following calculations and would ask you to verify them.
As usual, we have the statistics $$\overline X=\frac{1}{4}\sum_{i=1}^4 X_i\qquad,\qquad S^2=\frac{1}{3}\sum_{i=1}^4(X_i-\overline X)^2$$
Assuming both $\mu$ and $\sigma$ are unknown, we know that $(\overline X,S^2)$ is a complete sufficient statistic for $(\mu,\sigma^2)$. We also know that $\overline X$ and $S$ are independently distributed.
As you say,
\begin{align}
E\left(\overline X^2\right)&=\operatorname{Var}(\overline X)+\left(E(\overline X)\right)^2
\\&=\frac{\sigma^2}{4}+\mu^2
\end{align}
Since we are estimating $\mu^2/\sigma$, it is reasonable to assume that a part of our UMVUE is of the form $\overline X^2/S$. And for evaluating $E\left(\frac{\overline X^2}{S}\right)=E(\overline X^2)E\left(\frac{1}{S}\right)$, we have
\begin{align}
E\left(\frac{1}{S}\right)&=\frac{\sqrt{3}}{\sigma}\, E\left(\sqrt\frac{\sigma^2}{3\,S^2}\right)
\\\\&=\frac{\sqrt{3}}{\sigma}\, E\left(\frac{1}{\sqrt Z}\right)\qquad\qquad,\,\text{ where }Z\sim\chi^2_{3}
\\\\&=\frac{\sqrt{3}}{\sigma}\int_0^\infty \frac{1}{\sqrt z}\,\frac{e^{-z/2}z^{3/2-1}}{2^{3/2}\,\Gamma(3/2)}\,dz
\\\\&=\frac{1}{\sigma}\sqrt\frac{3}{2\pi}\int_0^\infty e^{-z/2}\,dz
\\\\&=\frac{1}{\sigma}\sqrt\frac{6}{\pi}
\end{align}
Again, for an unbiased estimator of $\sigma$, $$E\left(\frac{1}{2}\sqrt\frac{3\pi}{2}S\right)=\sigma$$
So,
\begin{align}
E\left(\frac{\overline X^2}{S}\right)&=E\left(\overline X^2\right)E\left(\frac{1}{S}\right)
\\&=\left(\mu^2+\frac{\sigma^2}{4}\right)\frac{1}{\sigma}\sqrt\frac{6}{\pi}
\\&=\sqrt\frac{6}{\pi}\left(\frac{\mu^2}{\sigma}+\frac{\sigma}{4}\right)
\end{align}
Or, $$E\left(\sqrt{\frac{\pi}{6}}\,\frac{\overline X^2}{S}-\frac{\frac{1}{2}\sqrt\frac{3\pi}{2}S}{4}\right)=\frac{\mu^2}{\sigma}$$
Hence our unbiased estimator based on the complete sufficient statistic $(\overline X,S^2)$ is
\begin{align}
T(X_1,X_2,X_3,X_4)&=\sqrt{\frac{\pi}{6}}\,\frac{\overline X^2}{S}-\frac{1}{8}\sqrt\frac{3\pi}{2}S
\end{align}
By Lehmann-Scheffe, $T$ is the UMVUE of $\mu^2/\sigma$.
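A Monte Carlo sanity check of unbiasedness, with arbitrary $\mu$, $\sigma$ and $n=4$ as in the problem; a sketch:
import numpy as np
rng = np.random.default_rng(0)
mu, sigma, n, B = 1.3, 0.8, 4, 1_000_000
x = rng.normal(mu, sigma, size=(B, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)
T = np.sqrt(np.pi / 6) * xbar**2 / s - np.sqrt(3 * np.pi / 2) * s / 8
print(T.mean(), mu**2 / sigma)     # both ~ 2.11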
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/373936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Density of sum of truncated normal and normal distribution Suppose that $\varepsilon\sim N(0, \sigma_\varepsilon)$ and $\delta\sim N^+(0, \sigma_\delta)$. What is the density function for $X = \varepsilon - \delta$?
This proof apparently appeared in a Query by M.A. Weinstein in Technometrics 6 in 1964, which stated that the density of $X$ is given by
$$f_X(x) = \frac{2}{\sigma} \phi\left(\frac{x}{\sigma}\right) \left(1 - \Phi\left(\frac{x\lambda}{\sigma}\right)\right),$$
where $\sigma^2 = \sigma_\varepsilon^2 + \sigma_\delta^2$ and $\lambda = \sigma_\delta / \sigma_\varepsilon$ and $\phi$ and $\Phi$ are the standard normal density and distribution functions, respectively. However, that paper is very difficult to find online. What is the proof that the density of $X$ takes the above form?
| Ultimately I needed to work through the algebra a bit more to arrive at the specified form. For posterity, the full proof is given below.
Proof
First consider the distribution function of $X$, which is given by
$$F(x) = \Pr(X \leq x) = \Pr(\varepsilon - \delta \leq x)$$
$$= \int_{\varepsilon - \delta \leq x} f_\varepsilon(\varepsilon) f_\delta(\delta) d\delta d\varepsilon$$
$$= \int_{\delta\in\mathbb{R}^+} f_\delta(\delta) \int_{\varepsilon\in (-\infty, x + \delta]} f_\varepsilon(\varepsilon) d\varepsilon d\delta.$$
Substituting in known density functions yields
$$\int_0^\infty 2\phi(\delta | 0, \sigma_\delta) \int_{-\infty}^{x + \delta} \phi(\varepsilon | 0, \sigma_\varepsilon) d\varepsilon d\delta$$
$$= 2\int_0^\infty \phi(\delta | 0, \sigma_\delta) \Phi(x + \delta | 0, \sigma_\varepsilon) d\delta.$$
The density of $X$ is then given by
$$f(x) = \frac{dF}{dx} = 2\int_0^\infty \phi(\delta | 0, \sigma_\delta) \phi(x + \delta | 0, \sigma_\varepsilon) d\delta.$$
Using Sage to perform this integration, the result is given by
$$f(x) = -\frac{{\left(\operatorname{erf}\left(\frac{\sigma_{\delta} x}{2 \, \sqrt{\frac{1}{2} \, \sigma_{\delta}^{2} + \frac{1}{2} \, \sigma_{\varepsilon}^{2}} \sigma_{\varepsilon}}\right) e^{\left(\frac{\sigma_{\delta}^{2} x^{2}}{2 \, {\left(\sigma_{\delta}^{2} \sigma_{\varepsilon}^{2} + \sigma_{\varepsilon}^{4}\right)}}\right)} - e^{\left(\frac{\sigma_{\delta}^{2} x^{2}}{2 \, {\left(\sigma_{\delta}^{2} \sigma_{\varepsilon}^{2} + \sigma_{\varepsilon}^{4}\right)}}\right)}\right)} e^{\left(-\frac{x^{2}}{2 \, \sigma_{\varepsilon}^{2}}\right)}}{2 \, \sqrt{\pi} \sqrt{\frac{1}{2} \, \sigma_{\delta}^{2} + \frac{1}{2} \, \sigma_{\varepsilon}^{2}}}.$$
Defining $\lambda = \sigma_\delta / \sigma_\varepsilon$ and $\sigma^2 = \sigma_\varepsilon^2 + \sigma_\delta^2$, the following can be simplified:
$$\frac{\sigma_{\delta} x}{2 \, \sqrt{\frac{1}{2} \, \sigma_{\delta}^{2} + \frac{1}{2} \, \sigma_{\varepsilon}^{2}} \sigma_{\varepsilon}} = \frac{\lambda x}{\sigma\sqrt{2}} = \frac{x}{(\sigma / \lambda) \sqrt{2}},$$
$$\frac{\sigma_{\delta}^{2} x^{2}}{2 \, {\left(\sigma_{\delta}^{2} \sigma_{\varepsilon}^{2} + \sigma_{\varepsilon}^{4}\right)}} = \frac{\lambda^2 x^2}{2\sigma^2} = \frac{x^2}{2(\sigma / \lambda)^2}.$$
Thus,
$$f(x) = -\frac{\exp\left(\frac{x^2}{2(\sigma / \lambda)^2}\right)\left(\operatorname{erf}\left(\frac{x}{(\sigma / \lambda) \sqrt{2}}\right) - 1\right)\exp\left(-\frac{x^2}{2\sigma_\varepsilon^2}\right)}{\sigma\sqrt{2\pi}}$$
$$= -\frac{\left(\operatorname{erf}\left(\frac{x}{(\sigma / \lambda) \sqrt{2}}\right) - 1\right) \exp\left(-x^2\left(\frac{1}{2\sigma_\varepsilon^2} - \frac{1}{2(\sigma / \lambda)^2}\right)\right)}{\sigma\sqrt{2\pi}}.$$
Now,
$$\operatorname{erf}\left(\frac{x}{(\sigma / \lambda) \sqrt{2}}\right) - 1 = \left(1 + \operatorname{erf}\left(\frac{x}{(\sigma / \lambda) \sqrt{2}}\right)\right) - 2 = 2\left(\frac{1}{2}\left(1 + \operatorname{erf}\left(\frac{x}{(\sigma / \lambda) \sqrt{2}}\right)\right) - 1\right)$$
$$= 2\left(\Phi\left(\frac{x\lambda}{\sigma}\right) - 1\right) = -2\left(1 - \Phi\left(\frac{x\lambda}{\sigma}\right)\right).$$
Also,
$$\frac{1}{2\sigma_\varepsilon^2} - \frac{1}{2(\sigma / \lambda)^2} = \frac{1}{2\sigma_\varepsilon^2} - \frac{\sigma_\delta^2}{2\sigma_\varepsilon^2(\sigma_\delta^2 + \sigma_\varepsilon^2)} = \frac{\sigma_\delta^2 + \sigma_\varepsilon^2 - \sigma_\delta^2}{2\sigma_\varepsilon^2(\sigma_\delta^2 + \sigma_\varepsilon^2)} = \frac{\sigma_\varepsilon^2}{2\sigma_\varepsilon^2(\sigma_\delta^2 + \sigma_\varepsilon^2)} = \frac{1}{2\sigma^2}.$$
So,
$$f(x) = 2\left(1 - \Phi\left(\frac{x\lambda}{\sigma}\right)\right)\frac{\exp\left(-\frac{x^2}{2\sigma^2}\right)}{\sigma\sqrt{2\pi}} = 2\left(1 - \Phi\left(\frac{x\lambda}{\sigma}\right)\right) \phi(x | 0, \sigma) = \frac{2}{\sigma}\phi\left(\frac{x}{\sigma}\right) \left(1 - \Phi\left(\frac{x\lambda}{\sigma}\right)\right).$$
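As a further check, the result can be verified by simulation. Up to notation, the density above is a skew-normal density with shape parameter $-\lambda$ and scale $\sigma$ (that identification is my own remark, not part of the original query), so scipy's skewnorm can serve as the reference; $\sigma_\varepsilon$ and $\sigma_\delta$ are treated as standard deviations. A sketch:
import numpy as np
from scipy.stats import skewnorm, kstest
rng = np.random.default_rng(0)
s_e, s_d, N = 1.0, 2.0, 10**6
x = rng.normal(0, s_e, N) - np.abs(rng.normal(0, s_d, N))   # eps - delta, with delta half-normal
sigma, lam = np.hypot(s_e, s_d), s_d / s_e
# 1 - Phi(x*lam/sigma) = Phi(-x*lam/sigma), so f is skewnorm(-lam, scale=sigma)
print(kstest(x, skewnorm(-lam, loc=0, scale=sigma).cdf))    # KS statistic ~ 0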
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/419722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
An inequality for a bi-modal hypergeometric distribution Say $X$ has a hypergeometric distribution with parameters $m$, $n$ and $k$, with $k\leq n<\frac12m$.
I know that $X$ has a dual mode if and only if $d=\frac{(k+1)(n+1)}{m+2}$ is integer. In that case $P(X=d)=P(X=d-1)$ equals the maximum probability.
I am wondering if I can say anything about $P(X=d+1)$ versus $P(X=d-2)$ then. When is the former higher than the latter? I.e. when is:
$P(X=d+1)>P(X=d-2)$
Always? I tried many combinations programmatically and did not find any counterexample.
So far I have found:
$\frac{P(X=d+1)}{P(X=d-2)}=\frac{(k-d+2)(k-d+1)(k-d)(n-d+2)(n-d+1)(n-d)}{(d+1)d(d-1)(m-k-n+d+1)(m-k-n+d)(m-k-n+d-1)}$
Because $d=\frac{(k+1)(n+1)}{m+2}$, this can be simplified to:
$\frac{P(X=d+1)}{P(X=d-2)}=\frac{(k-d+2)(k-d)(n-d+2)(n-d)}{(d+1)(d-1)(m-k-n+d+1)(m-k-n+d-1)}$
I have tried further combining this with $d=\frac{(k+1)(n+1)}{m+2}$ being integer, but that gets quite complex and gives me no further clue.
I feel there is something relatively easy to prove here...?
For $n=\frac12m$, $P(X=d+1)=P(X=d-2)$ due to symmetry.
| In the case you are considering you have $P(X=d)=P(X=d-1)$
so let's consider the sign of $$\frac{P(X=d+1)}{P(X=d)}-\frac{P(X=d-2)}{P(X=d-1)} = \tfrac{(k-d)(n-d)}{(d+1) (m-k-n+d+1)} -\tfrac{ (d-1) (m-k-n+d-1)}{(k-d+2)(n-d+2)} \\= \tfrac{(k-d)(n-d)(k-d+2)(n-d+2)-(d-1) (m-k-n+d-1)(d+1) (m-k-n+d+1)}{(d+1) (m-k-n+d+1)(k-d+2)(n-d+2)}$$
The denominator is positive so does not affect the sign. In the numerator, we can make the substitution $d=\frac{(k+1)(n+1)}{m+2}$ and then multiply through by the positive $(m+2)^4$. Expanding the result and factorising gives
$$\tfrac{m^6 +(8-2n-2k)m^5 +(24-16n-16k+kn)m^4 +(32-48n-48k+32kn)m^3 +(16-64n-64k+96kn)m^2 +(-32n-32k+128kn)m +64kn}{\text{something positive}} \\ = \frac{(m+2)^4(m-2n)(m-2k)}{\text{something positive}} $$
and this is positive, i.e. $P(X=d+1) > P(X=d-2)$, when $k\leq n<\frac12m$. $\blacksquare$
As a check, the difference is actually $\frac{(m-2n)(m-2k)}{(d+1) (m-k-n+d+1)(k-d+2)(n-d+2)}$.
It is also positive when $n\lt k<\frac12m$, and when both $k > \frac12m$ and $n>\frac12m$.
It is negative, i.e. $P(X=d+1) < P(X=d-2)$, when $k\lt \frac12m <n$ or $n\lt \frac12m <k$.
Finally, it is zero, i.e. $P(X=d+1) = P(X=d-2)$ when $k= \frac12m$ or $n= \frac12m$.
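A brute-force numerical confirmation over a small grid of parameters; a sketch, where I am assuming scipy's hypergeom(M, n, N) parameterization matches the question's $(m, n, k)$ so that the mode lies at $\lfloor (k+1)(n+1)/(m+2)\rfloor$:
import numpy as np
from scipy.stats import hypergeom
checked = 0
for m in range(10, 80):
    for n in range(2, (m - 1) // 2 + 1):      # n < m/2
        for k in range(2, n + 1):             # k <= n
            if ((k + 1) * (n + 1)) % (m + 2) != 0:
                continue                      # d must be an integer
            d = (k + 1) * (n + 1) // (m + 2)
            if d < 2:
                continue
            p = hypergeom(m, n, k).pmf([d - 2, d - 1, d, d + 1])
            assert np.isclose(p[1], p[2])     # tied modes at d-1 and d
            assert p[3] > p[0]                # P(X = d+1) > P(X = d-2)
            checked += 1
print(checked, "bimodal cases checked, no counterexamples")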
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/458289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Hellinger distance for two shifted log-normal distributions If I am not mistaken, Hellinger distance between P and Q is generally given by:
$$
H^2(P, Q) = \frac12 \int \left( \sqrt{dP} - \sqrt{dQ} \right)^2
.$$
If P and Q, however, are two differently shifted log-normal distributions of the following form
$$
{\frac {1}{(x-\gamma)\sigma {\sqrt {2\pi \,}}}}\exp \left(-{\frac {[\ln (x-\gamma)-\mu ]^{2}}{2\sigma ^{2}}}\right)
,$$
how would the Hellinger distance then be formed?
in terms of $\gamma_1,\gamma_2, \mu_1, \mu_2$, etc.
| Note that
\begin{align}
H^2(P, Q)
&= \frac12 \int (\sqrt{dP} - \sqrt{dQ})^2
\\&= \frac12 \int dP + \frac12 \int dQ - \int \sqrt{dP} \sqrt{dQ}
\\&= 1 - \int \sqrt{dP} \sqrt{dQ}
,\end{align}
and that the density function is 0 if $x \le \gamma$.
Thus your question asks to compute
\begin{align}
1 - H^2(P, Q)
&= \int_{\max(\gamma_1,\gamma_2)}^\infty
\sqrt{\frac{1}{(x - \gamma_1) \sigma_1 \sqrt{2 \pi}} \exp\left( - \frac{\left(\ln(x - \gamma_1) - \mu_1\right)^2}{2 \sigma_1^2} \right)}
\\&\qquad\qquad
\sqrt{\frac{1}{(x - \gamma_2) \sigma_2 \sqrt{2 \pi}} \exp\left( - \frac{\left(\ln(x - \gamma_2) - \mu_2\right)^2}{2 \sigma_2^2} \right)}
dx
\\&= \sqrt{\frac{1}{2 \pi \sigma_1 \sigma_2}} {\huge\int}_{\max(\gamma_1,\gamma_2)}^\infty
\frac{\exp\left( - \frac{\left(\ln(x - \gamma_1) - \mu_1\right)^2}{4 \sigma_1^2} - \frac{\left(\ln(x - \gamma_2) - \mu_2\right)^2}{4 \sigma_2^2} \right)}{\sqrt{(x - \gamma_1)(x - \gamma_2)}}
dx
.\end{align}
Assume (WLOG) that $\gamma_1 \ge \gamma_2$, and
do a change of variables to $y = \ln(x - \gamma_1)$, $dy = \frac{1}{x - \gamma_1} dx$. Let $\Delta = \gamma_1 - \gamma_2$, so we have
$
x - \gamma_2
= \exp(y) + \Delta
.$
Then we get $1 - H^2(P, Q)$ as
\begin{align}
\sqrt{\frac{1}{2 \pi \sigma_1 \sigma_2}} \int_{-\infty}^\infty
\exp\left( - \frac{\left(y - \mu_1\right)^2}{4 \sigma_1^2} - \frac{\left(\ln(\exp(y) + \Delta) - \mu_2\right)^2}{4 \sigma_2^2} \right)
\sqrt{\frac{\exp(y)}{\exp(y) + \Delta}}
dy
.\end{align}
If $\gamma_1 = \gamma_2$ so $\Delta = 0$, this works out to
$$
H^2(P, Q)
= 1 -
\sqrt{\frac{2 \sigma_1 \sigma_2}{\sigma_1^2 + \sigma_2^2}}
\exp\left( - \frac{(\mu_1 - \mu_2)^2}{4 (\sigma_1^2 + \sigma_2^2)} \right)
.$$
For $\Delta \ne 0$, though, neither I nor Mathematica made any immediate headway.
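The general-$\Delta$ case can at least be evaluated numerically. A sketch that integrates the last expression with scipy and, as a sanity check, reproduces the $\Delta=0$ closed form (the parameter values are arbitrary, and the integration bounds are simply chosen wide enough for them):
import numpy as np
from scipy.integrate import quad
def bhattacharyya(mu1, s1, g1, mu2, s2, g2):
    # returns 1 - H^2 for two shifted log-normals with parameters (gamma_i, mu_i, sigma_i)
    if g1 < g2:                                        # WLOG gamma_1 >= gamma_2
        mu1, s1, g1, mu2, s2, g2 = mu2, s2, g2, mu1, s1, g1
    delta = g1 - g2
    def integrand(y):
        return np.exp(-(y - mu1)**2 / (4 * s1**2)
                      - (np.log(np.exp(y) + delta) - mu2)**2 / (4 * s2**2)) \
               * np.sqrt(np.exp(y) / (np.exp(y) + delta))
    val, _ = quad(integrand, mu1 - 40 * s1, mu1 + 40 * s1)
    return val / np.sqrt(2 * np.pi * s1 * s2)
mu1, s1, mu2, s2 = 0.2, 0.5, -0.3, 1.1
closed = np.sqrt(2 * s1 * s2 / (s1**2 + s2**2)) * np.exp(-(mu1 - mu2)**2 / (4 * (s1**2 + s2**2)))
print(bhattacharyya(mu1, s1, 0.0, mu2, s2, 0.0), closed)   # agree when Delta = 0
print(1 - bhattacharyya(mu1, s1, 1.0, mu2, s2, 0.0))       # H^2 for a shift difference of 1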
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/361280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Range of integration for joint and conditional densities Did I mess up the range of integration in my solution to the following problem ?
Consider an experiment for which, conditioned on $\theta,$ the density of $X$ is
\begin{align*}
f_{\theta}(x) = \frac{2x}{\theta^2},\,\,0 < x< \theta.
\end{align*}
Suppose the prior density for $\theta$ is
\begin{align*}
\pi(\theta) = 1,\,\,\,0 \leq \theta \leq 1
\end{align*}
Find the posterior density of $\theta,$ then find $\mathbb{E}[\theta|X]$. Do the same for $X = (X_1,\dots, X_n)$
where $X_1,\dots, X_n$ are i.i.d. and have the density above.
The joint density of $\theta$ and $X$ is given by
\begin{align*}
f_{\theta}(x)\pi(\theta) = \frac{2x}{\theta^2},\,\,0 < x< \theta \leq 1.
\end{align*}
and so the marginal density $g(x)$ of $X$ is given by
\begin{align*}
g(x)=\int_{x}^1f_{\theta}(x)\pi(\theta)d\theta &= \int_{x}^1\frac{2x}{\theta^2}d\theta\\
&=2x\int_{x}^1\theta^{-2}d\theta\\
&=2x[-\frac{1}{\theta}]_x^1\\
&= -2(x -1),\,\,\,0 <x<1
\end{align*}
So the posterior density of $\theta$ is
\begin{align*}
f_x(\theta) = \frac{f_{\theta}(x)\pi(\theta)}{g(x)} = \frac{-x}{(x-1)\theta^2}, \,\, x < \theta \leq 1
\end{align*}
and
\begin{align*}
\mathbb{E}[\theta|X]&= \int_{x}^1\frac{-x}{x-1}\theta^{-1}d\theta\\
&=\frac{-x}{x-1}\ln\theta|_x^1\\
&= \frac{x}{x-1}\ln x
\end{align*}
Now let $X = (X_1,\dots, X_n)$ where each $X_i$ has the density above. Then the joint density is
\begin{align*}
f_{\theta}(x)\pi(\theta) = \prod_{i = 1}^n\frac{2x_i}{\theta^2},\,\, 0 < x_{[1]} \leq x_{[n]} < \theta \leq 1
\end{align*}
and so the marginal density $g(x)$ of $X$ is given by
\begin{align*}
g(x)=\int_{x_{[n]}}^1f_{\theta}(x)\pi(\theta)d\theta &= \int_{x_{[n]}}^1\prod_{i = 1}^n\frac{2x_i}{\theta^2}d\theta\\
&=\prod_{i = 1}^n2x_i\int_{x_{[n]}}^1\theta^{-2}d\theta\\
&=\prod_{i = 1}^n2x_i[-\frac{1}{\theta}]_{x_{[n]}}^1\\
&=\Bigg(\frac{1}{x_{[n]}} -1\Bigg) \prod_{i = 1}^n2x_i,\,\,\,0 <x<1
\end{align*}
and so the posterior density is
\begin{align*}
f_{x}(\theta) = \Bigg(\prod_{i = 1}^n\frac{2x_i}{\theta^2}\Bigg) \cdot \Bigg( \Bigg(\frac{1}{x_{[n]}} -1\Bigg) \prod_{i = 1}^n2x_i \Bigg)^{-1}
\end{align*}
| The univariate case seems correct to me. The multivariate case should be as follows:
$$\begin{align*}
g(x)=\int_{x_{[n]}}^1f_{\theta}(x)\pi(\theta)d\theta &= \int_{x_{[n]}}^1\prod_{i = 1}^n\left(\frac{2x_i}{\theta^2}\right)d\theta\\
&=\left(\prod_{i = 1}^n2x_i\right)\int_{x_{[n]}}^1\theta^{-2n}d\theta\\
&=\left(\prod_{i = 1}^n2x_i\right)\left[-\frac{1}{(2n-1)\theta^{2n-1}}\right]_{x_{[n]}}^1\\
&=\left(\frac{1}{2n-1}\right)\Bigg(\frac{1}{\left(x_{[n]}\right)^{2n-1}} -1\Bigg) \left(\prod_{i = 1}^n2x_i\right),\,\,\,0 <x<1
\end{align*}$$
Then, the posterior is
$$\begin{align*}
f_{x}(\theta) = \frac{2n-1}{\theta^{2n}} \Bigg(\frac{1}{\left(x_{[n]}\right)^{2n-1}} -1\Bigg)^{-1}, x_{[n]}<\theta\leq 1
\end{align*}$$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/431601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
MLE Derivation for AR Model So I am trying to derive the MLE for an AR(1) model. Here are my thoughts thus far:
The AR process is: $z_t = \delta + \psi_1z_{t-1} + \epsilon_t$
The expected value of $z_t = \frac{\delta}{1 - \psi_1}$.
The variance of $z_t = \frac{1}{1 - \psi_1^2}$.
So this is where I am getting caught up.
I have the PDF of $z_t$ as:
\begin{align}
f(z_t;\theta) &= (2 \pi \sigma^2)^{-\frac{1}{2}}
\exp \left [-\frac{1}{2} \left (\frac{z_t - \mathbb{E}[z_t]}{\sqrt{\sigma^2}}\right )^2 \right] \\
&= \left (2 \pi \frac{1}{1 - \psi_1^2} \right )^{-\frac{1}{2}}
\exp \left [-\frac{1}{2} \left (\frac{z_t - \frac{\delta}{1 - \psi_1}}
{\sqrt{\frac{1}{1 - \psi_1^2}}} \right )^2 \right] \\
&= \left (2 \pi \frac{1}{1 - \psi_1^2} \right )^{-\frac{1}{2}}
\exp \left [-\frac{1}{2} \left (\frac{ \left(z_t - \frac{\delta}{1 - \psi_1} \right )^2}{\frac{1}{1 - \psi_1^2}} \right ) \right] \\
&= \left (2 \pi \frac{1}{1 - \psi_1^2} \right )^{-\frac{1}{2}}
\exp \left [-\frac{1 - \psi_1^2}{2} \left( z_t - \frac{\delta}{1 - \psi_1} \right)^2 \right]
\end{align}
Now, can I assume i.i.d. here? I feel like no because then I would have a time series that is just white noise right? However, if I did assume i.i.d., I would have:
$\mathscr{L} = \prod_{t=1}^T \left (2 \pi \frac{1}{1 - \psi_1^2} \right )^{-\frac{1}{2}}
\exp \left [-\frac{1 - \psi_1^2}{2} \left( z_t - \frac{\delta}{1 - \psi_1} \right)^2 \right]$
And then from here what exactly would my log likelihood function be? I feel like I am totally screwing this up but this is what I have for it:
$\ln \mathscr{L} = -\frac{T}{2} \ln \left ( 2 \pi \frac{1}{1 - \psi_1^2} \right )
- \frac{(1 - \psi_1^2) \sum_{t=1}^T \left (z_t - \frac{\delta}{1 - \psi_1} \right )^2}{2}$
Any help is greatly appreciated! Thank you!!
| I'm not directly answering your question, but a quick note on the construction of the likelihood function in your model. The likelihood of a $T$-sized sample of $\mathbf{e} = \left[ \epsilon_{1}, \, \epsilon_{2}, \ldots, \, \epsilon_{T} \right]^{\mathsf{T}}$ of i.i.d. normal distributed $\epsilon \sim N(0,\sigma^{2})$ is
$$
L(\mathbf{e}) = (2 \pi \sigma^{2})^{-\frac{T}{2}} \exp \left[ \frac{- \mathbf{e}^\mathsf{T} \mathbf{e}}{2\sigma^{2}} \right]
$$
But obviously you observe $\mathbf{z} = \left[ z_{1}, \, z_{2}, \ldots, \, z_{T} \right]^{\mathsf{T}}$ instead of $\mathbf{e}$. From your AR(1) setup you have $\mathbf{e} = \mathbf{G} \mathbf{z} - \delta \mathbf{I}_{T}$ where
$$
\mathbf{G} = \begin{bmatrix}
\sqrt{1-\psi_{1}^2} & 0 & 0 & \cdots & 0 \\
-\psi_{1} & 1 & 0 & \cdots & 0 \\
0 & -\psi_{1} & 1 & \cdots & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & \cdots & 1
\end{bmatrix},
$$
(see Prais & Winsten, eqn. 15). Then
$$L(\mathbf{z}) = L(\mathbf{e}) \left| \frac{d \mathbf{e}}{d\mathbf{z}} \right|$$
where $\left| \frac{d \mathbf{e}}{d\mathbf{z}} \right|$ is the Jacobian (determinant) of the transformation, in this case $\operatorname{det} \mathbf{G}$, which works out to be $\left| 1 -\psi_{1}^{2} \right|^{\frac{1}{2}}$ because of all the off-diagonal zeros in $\mathbf{G}$. See Beach & MacKinnon (1978) for more details.
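To make this concrete, here is a sketch (one possible way to write it, with arbitrary parameter values) of the exact Gaussian log-likelihood for the AR(1) model with intercept, including the $\tfrac{1}{2}\ln(1-\psi_1^2)$ term coming from that Jacobian; conditioning on $z_1$ instead would drop this term and the first squared residual:
import numpy as np
def ar1_exact_loglik(z, delta, psi, sigma2):
    # exact log-likelihood of z_t = delta + psi z_{t-1} + eps_t, eps_t ~ N(0, sigma2)
    T = len(z)
    mu = delta / (1 - psi)                        # stationary mean
    ss = (1 - psi**2) * (z[0] - mu)**2            # transformed first observation
    ss += np.sum((z[1:] - delta - psi * z[:-1])**2)
    return -0.5 * T * np.log(2 * np.pi * sigma2) + 0.5 * np.log(1 - psi**2) - ss / (2 * sigma2)
rng = np.random.default_rng(0)
delta, psi, sigma2, T = 0.5, 0.6, 1.0, 500
z = np.empty(T)
z[0] = rng.normal(delta / (1 - psi), np.sqrt(sigma2 / (1 - psi**2)))
for t in range(1, T):
    z[t] = delta + psi * z[t - 1] + rng.normal(0.0, np.sqrt(sigma2))
print(ar1_exact_loglik(z, delta, psi, sigma2))    # maximize this over (delta, psi, sigma2) for the MLE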
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/511402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Posterior distribution of Normal Normal-inverse-Gamma Conjugacy Here is the setting:
The likelihood of data is
\begin{align}
p(\boldsymbol{x} | \mu, \sigma^2)
&= (\frac{1}{2\pi \sigma^2})^{\frac{n}{2}} exp\left\{ -\frac{1}{2\sigma^2} \sum\limits_{i=1}^n (x_i - \mu)^2 \right\} \nonumber \\
&= \frac{1}{(2\pi)^{n/2}} (\sigma^2)^{-n/2} exp\left\{ -\frac{1}{2\sigma^2} \left[ \sum\limits_{i=1}^n (x_i - \overline{x})^2 + n(\overline{x} - \mu)^2 \right] \right\}, \nonumber
\end{align}
and we use the Normal-inverse-Gamma as prior
\begin{align}
p(\mu , \sigma^2)
&= \mathcal{N} (\mu | \mu_0 , \sigma^2 V_0) IG(\sigma^2 | \alpha_0 , b_0 ) \nonumber \\
&= \frac{1}{\sqrt{2\pi V_0}} \frac{b_0^{\alpha_0}}{\Gamma(\alpha_0)}\frac{1}{\sigma} (\sigma^2)^{-\alpha_0 - 1} exp\left( -\frac{1}{2\sigma^2} [V_0^{-1}(\mu - \mu_0 )^2 + 2b_0] \right). \nonumber
\end{align}
Then, the posterior can be derivated via
\begin{align}
p(\mu , \sigma^2 | \boldsymbol{x})
&\propto p(\boldsymbol{x} | \mu , \sigma^2 ) p(\mu , \sigma^2) \nonumber \\
&\propto \left[ (\sigma^2)^{-n/2} exp \left( -\frac{1}{2\sigma^2} \big[\sum\limits_{i=1}^b (x_i - \overline{x})^2 + n(\overline{x} - \mu )^2\big] \right) \right] \times \left[ \sigma^{-1} (\sigma^2)^{-\alpha_0 - 1} exp \left( -\frac{1}{2\sigma^2} \left[ V_0^{-1} (\mu - \mu_0 )^2 + 2 b_0 \right] \right) \right] \nonumber \\
&= \sigma^{-1} (\sigma^2)^{-(\alpha_0 + \frac{n}{2}) - 1} exp \left( -\frac{1}{2\sigma^2} \big[ V_0^{-1} (\mu - m_0 )^2 + 2 b_0 + \sum\limits_{i=1}^n (x_i - \overline{x})^2 + n(\overline{x} - \mu)^2 \big] \right) \nonumber \\
&= \sigma^{-1} (\sigma^2)^{-(\alpha_0 + \frac{n}{2}) - 1} exp \Big\{ -\frac{1}{2\sigma^2} \Big[ (V_0^{-1} + n)(\mu - \frac{V_0^{-1} m_0 + n\overline{x}}{V_0^{-1} + n})^2 + \big(b_0 + \frac{1}{2}\sum\limits_{i=1}^n (x_i - \overline{x})^2 + \frac{V_0^{-1} n}{2(V_0^{-1} + n)} (m_0 - \overline{x})^2 \big) \Big] \Big\} \nonumber
\end{align}
We recognize this is an unnormalized Normal-inverse-Gamma distribution, therefore
\begin{align}
p(\mu , \sigma^2 | \boldsymbol{x}) = NIG(\mu , \sigma^2 | m_n , V_n , \alpha_n , b_n ), \nonumber
\end{align}
where
\begin{align}
m_n &= \frac{V_0^{-1} m_0 + n \overline{x}}{V_0^{-1} + n} \nonumber \\
V_n^{-1} &= V_0^{-1} + n \nonumber \\
\alpha_n &= \alpha_0 + \frac{n}{2} \nonumber \\
b_n &= b_0 + \frac{1}{2}\sum\limits_{i=1}^n (x_i - \overline{x})^2 + \frac{V_0^{-1} n}{2(V_0^{-1} + n)}(m_0 - \overline{x})^2. \nonumber
\end{align}
As indicated in this paper (see Eq(200)), the last term can be further expressed as
\begin{align}
b_n &= b_0 + \frac{1}{2} \left[ m_0^2 V_0^{-1} + \sum\limits_{i=1}^n x_i^2 - m_n^2 V_n^{-1} \right]. \nonumber
\end{align}
But I fail to prove it, i.e.,
\begin{align}
\sum\limits_{i=1}^n (x_i - \overline{x})^2 + \frac{V_0^{-1} n}{(V_0^{-1} + n)}(m_0 - \overline{x})^2 &= \left[ m_0^2 V_0^{-1} + \sum\limits_{i=1}^n x_i^2 - m_n^2 V_n^{-1} \right]. \nonumber
\end{align}
| This is much simpler to prove compared with your earlier question.
\begin{align}
\sum\limits_{i=1}^n (x_i - \overline{x})^2 &+ \frac{V_0^{-1} n}{(V_0^{-1} + n)}(m_0 - \overline{x})^2 = \sum\limits_{i=1}^n x_i^2 - n\overline{x}^2\\
&\qquad + \frac{V_0^{-1} n}{(V_0^{-1} + n)}(m_0^2 - 2 m_0\overline{x} + \overline{x}^2)\\
&= \sum\limits_{i=1}^n x_i^2 - n\overline{x}^2 + \frac{V_0^{-1} (n+V_0^{-1}-V_0^{-1})}{(V_0^{-1} + n)}m_0^2\\
&\quad-2\frac{V_0^{-1} nm_0\overline{x}}{(V_0^{-1} + n)}
+\frac{(V_0^{-1}+n-n) n}{(V_0^{-1} + n)}\overline{x}^2\\
&= \sum\limits_{i=1}^n x_i^2 + V_0^{-1}m_0^2 - \frac{V_0^{-2}m_0^2}{(V_0^{-1} + n)}\\
&\quad -2\frac{V_0^{-1} m_0n\overline{x}}{(V_0^{-1} + n)}
-\frac{n^2\overline{x}^2}{(V_0^{-1} + n)}\\
&= V_0^{-1}m_0^2 + \sum\limits_{i=1}^n x_i^2 - \frac{(n\overline x+V_0^{-1} m_0)^2}{V_n^{-1}}\\
&= V_0^{-1}m_0^2 + \sum\limits_{i=1}^n x_i^2 -V_n^{-1}m_n^2
\end{align}
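A quick numerical confirmation of the identity with arbitrary data and hyperparameters; a sketch:
import numpy as np
rng = np.random.default_rng(0)
x = rng.normal(2.0, 1.5, size=25)
m0, V0inv = 0.7, 3.0
n, xbar = len(x), x.mean()
Vninv = V0inv + n
mn = (V0inv * m0 + n * xbar) / Vninv
lhs = np.sum((x - xbar)**2) + V0inv * n / Vninv * (m0 - xbar)**2
rhs = V0inv * m0**2 + np.sum(x**2) - Vninv * mn**2
print(lhs, rhs)    # equal up to floating-point error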
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/512681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Derivation of F-distribution from inverse Chi-square? I am trying to derive F-distribution from Chi-square and inverse Chi-square. Somewhere in process I make a mistake and result slightly differs from the canonical form of Fisher-Snedecor F distribution. Please, help find it.
In order to derive p.d.f. of F-distribution, let us substitute the p.d.f. of chi-square and inverse chi-square distributions into F-distribution probability density function and integrate it over all possible values of $\chi^2_n=t$, such that $\frac{\chi_n^2}{\chi_m^2} = x$:
$f_{\frac{\chi_n^2}{\chi_m^2}}(x) = \int \limits_{t=0}^{\infty} f_{\chi^2_n}(t) f_{\frac{1}{\chi^2_m}}(\frac{x}{t})dt = \int \limits_{t=0}^{\infty} \frac{t^{n/2-1}e^{-t/2}}{2^{n/2}\Gamma(n/2)} \frac{{\frac{t}{x}}^{m/2+1}e^{-\frac{t}{2x}}}{2^{m/2}\Gamma(m/2)}dt = $
$ = \frac{1}{\Gamma(n/2)\Gamma(m/2) 2^{\frac{m+n}{2}} x^{m/2+1}} \int \limits_{t=0}^{\infty}t^{\frac{n+m}{2}}e^{-(t+\frac{t}{x})/2}dt = \frac{1}{\Gamma(n/2)\Gamma(m/2) 2^{\frac{m+n}{2}} x^{m/2+1}} \int \limits_{t=0}^{\infty}t^{\frac{n+m}{2}}e^{-\frac{t}{2}(1+\frac{1}{x})}dt$.
We aim to convert our integral into a gamma-function $\Gamma(n) = \int \limits_{0}^{\infty} z^{n-1}e^{-z}dz$.
In order to do that we shall perform a variable substitution $z = \frac{x+1}{x}\frac{t}{2}$, hence, $t = \frac{2x}{x+1}z$. Our integral then will take form of a gamma-function:
$\int \limits_{t=0}^{\infty}t^{\frac{n+m}{2}}e^{-\frac{t}{2}(1+\frac{1}{x})}dt = \int \limits_{z=0}^{\infty} (\frac{2zx}{x+1})^{\frac{n+m}{2}} e^{-z} \frac{2x}{x+1} dz = (\frac{x}{x+1})^{\frac{n+m}{2}+1} \cdot 2^{\frac{n+m}{2}+1} \cdot \int \limits_{z=0}^{\infty} z^{\frac{n+m}{2}}e^{-z}dz = \frac{x}{x+1}^{\frac{n+m}{2}+1} 2^{\frac{n+m}{2}+1} \Gamma(\frac{n+m}{2}+1)$
Substituting it into the expression for p.d.f., we get:
$f_{\frac{\chi^2_n}{\chi^2_m}}(x) = \frac{\Gamma(\frac{n+m}{2}+1)}{\Gamma(n/2)\Gamma(m/2)} \frac{2^{\frac{n+m}{2}+1}}{2^{\frac{m+n}{2}}} (\frac{x}{x+1})^{\frac{n+m}{2}+1} \frac{1}{x^{\frac{m}{2}+1}} = \frac{2\Gamma(\frac{n+m}{2}+1)}{\Gamma(n/2)\Gamma(m/2)} \frac{x^{\frac{n}{2}}}{(x+1)^{\frac{n+m}{2}+1}} = \frac{\Gamma(\frac{m+n}{2})}{\Gamma(\frac{m}{2}) \Gamma(\frac{n}{2})} \frac{x^{\frac{n}{2}-1}}{(x+1)^{\frac{n+m}{2}}} \frac{2x}{\frac{n+m}{2}(x+1)}$.
As you can see the result differs from the canonical p.d.f. of F distribution $\frac{\Gamma(\frac{m+n}{2})}{\Gamma(\frac{m}{2}) \Gamma(\frac{n}{2})} \frac{x^{\frac{n}{2}-1}}{(x+1)^{\frac{n+m}{2}}}$ by the multiplier $\frac{2x}{\frac{n+m}{2}(x+1)}$. Can you point to a mistake in this derivation?
| In
$$f_{\frac{\chi_n^2}{\chi_m^2}}(x) = \int \limits_{t=0}^{\infty} f_{\chi^2_n}(t) f_{\frac{1}{\chi^2_m}}(\frac{x}{t})dt$$
the Jacobian term is missing.
Indeed, if $Z\sim\chi^2_n$ and $Y\sim\chi^{-2}_m$, and if $X=ZY$, the joint density of $(Z,X)$ is
$$f_{\chi^2_n}(z) f_{\chi^{-2}_m}(\frac{x}{z})\left|\frac{\text dy}{\text dx}\right|=f_{\chi^2_n}(z) f_{\chi^{-2}_m}(\frac{x}{z})\frac{1}{z}$$
and
\begin{align*}f_X(x) &= \int_0^\infty f_{\chi^2_n}(z) f_{\chi^{-2}_m}(\frac{x}{z})\frac{1}{z}\,\text dz\\
&= K_{n,m} \int_0^\infty z^{n/2-1}e^{-z/2}(x/z)^{-m/2-1}e^{-z/2x}\frac{1}{z}~\text dz\\
&= K_{n,m} x^{-m/2-1} \int_0^\infty z^{(m+n)/2-1}e^{-(1+x)z/2x}~\text dz\\
&= K_{n,m} x^{-m/2-1} \{(1+x)/x\}^{-(m+n)/2+1-1}\int_0^\infty \zeta^{(m+n)/2-1}e^{-\zeta/2}~\text d\zeta\\
&= K^\prime_{n,m}\,\dfrac{x^{n/2-1}}{(1+x)^{(m+n)/2}}\\
\end{align*}
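A quick simulation check of the resulting density; a sketch. (The normalizing constant works out to $K^\prime_{n,m} = 1/B(\tfrac n2,\tfrac m2)$, i.e. $X=\chi^2_n/\chi^2_m$ is beta-prime$(\tfrac n2,\tfrac m2)$ and $\tfrac mn X \sim F_{n,m}$; these last remarks are mine, not part of the derivation above.)
import numpy as np
from scipy.stats import chi2, betaprime, kstest
rng = np.random.default_rng(0)
n, m, N = 5, 7, 10**6
x = chi2.rvs(n, size=N, random_state=rng) / chi2.rvs(m, size=N, random_state=rng)
print(kstest(x, betaprime(n / 2, m / 2).cdf))   # KS statistic ~ 0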
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/531835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Cumulative incidence of X Suppose the joint survival function of the latent failure times for two competing risks, $X$ and $Y$, is $S(x,y)=(1-x)(1-y)(1+0.5xy)$, $0<x<1$, $0<y<1$. Find the cumulative incidence function of $X$?
I first solved the marginal cumulative distribution function of $X$: $(1-x)$. Then I tried to find the joint density function: $1.5-x-y+2xy$, but I am unable to determine how to properly integrate this to find the cumulative incidence function of $X$.
| For $0 \leq x \leq 1$ the cumulative incidence of $X$ is defined as
$$
\mathbb P \left (X \leq x, X \leq Y \right)
$$
To compute this probability we need to integrate the joint density of $(X,Y)$,
$$
f(x,y) = \frac{3}{2} -x-y+2xy
$$
over the set $\mathcal{A} \equiv \{(u,v) \in [0,1]^2 \mid u \leq x \wedge u \leq v \} $
That is,
\begin{align*}
\mathbb P \left (X \leq x , X \leq Y \right) &= \int_\mathcal{A} f(u,v)\text{d}u\text{d}v \\
&= \int_0^x \left( \int_u^1 f(u,v)\text{d}v \right )\text{d}u \\
&= \int_0^x \left( \int_u^1 \left(\frac{3}{2} -u-v+2uv \right)\text{d}v \right )\text{d}u \\
&= \int_0^x \left( \frac{3}{2}(1-u) -u+u^2 - \left[ \frac{v^2}{2} \right]_u^1 +2u \left[ \frac{v^2}{2} \right]_u^1 \right) \text{d}u\\
&=\int_0^x \left( \frac{3}{2} - \frac{3}{2} u -u+u^2 - \left( \frac{1}{2}-\frac{u^2}{2} \right) +2u \left( \frac{1}{2} - \frac{u^2}{2} \right) \right) \text{d}u\\
&=\int_0^x \left( 1- \frac{3u}{2} + \frac{3u^2}{2} -u^3\right) \text{d}u\\
&= \left[ u - \frac{3u^2}{4} + \frac{u^3}{2} - \frac{u^4}{4} \right ]_0^x \\
&= x - \frac{3x^2}{4} + \frac{x^3}{2} - \frac{x^4}{4} \\
&= \frac{12x -9x^2 + 6x^3 - 3x^4}{12}.
\end{align*}
A quick check, take $x=1$ we have
$$
\frac{12x -9x^2 + 6x^3 -3x^4}{12} = \frac{1}{2}
$$ and
$$
\mathbb P \left (X \leq 1, X \leq Y \right) \stackrel{(1)}{=} \mathbb P(X\leq Y) \stackrel{(2)}{=}\frac{1}{2}
$$
Here $(1)$ comes from the fact that since $X \leq 1$ with probability $1$, the event $\{X \leq 1 \cap X \leq Y\}$ has the same probability as the event $\{X \leq Y\}$ and since $X$ and $Y$ are somehow "symmetric" we have $\mathbb P(X\leq Y) = \mathbb P(Y\leq X) = \frac{1}{2}$ hence equality $(2)$.
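A numerical double-check of the polynomial with scipy's double integration; a sketch:
import numpy as np
from scipy.integrate import dblquad
f = lambda v, u: 1.5 - u - v + 2 * u * v        # joint density, inner variable v, outer variable u
cif = lambda x: dblquad(f, 0, x, lambda u: u, lambda u: 1)[0]   # P(X <= x, X <= Y)
for x in (0.25, 0.5, 0.75, 1.0):
    print(x, cif(x), (12 * x - 9 * x**2 + 6 * x**3 - 3 * x**4) / 12)   # the two agree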
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/550004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Non-IID Uniform Distribution $A$ is uniform (0, 2) and $B$ is uniform(1, 3). Find the Cov$(W, Z)$, where $W=\min(A,B)$ and $Z=\max(A,B).$
Since $WZ = AB,$ then by independence of $A$ and $B$, $E(WZ) = E(A)E(B),$ so that
$$\operatorname{Cov}(W,Z) = E(A) E (B) - E (W) E (Z) = (1)(2) - E (W) E (Z)$$
It suffices to find E(W) and E(Z) which I have
\begin{align*}
F_{W}(w) = P(W\le w) = 1- [(1-P(A< w) )(1-P(B< w ))]
= 1- \left[ \left( 1 - \frac{w}{2}\right) \left( 1 - \frac{w-1}{2}\right) \right]
= 1 - \left[ \left( \frac{2-w}{2}\right) \left(\frac{3-w}{2}\right) \right].
\end{align*}
Then $f_W(w) = -\frac{2w}{4} - \frac{5}{4}$ which is negative violating nonnegativity of PDF. Where did I go wrong with the computations?
\begin{align*}
E[W] = E_W (A\le B) + E_W (A> B) = \int_0^1\int_u^3 \times PDF(W) da \ db +\int_0^1\int_1^u PDF(W) db \ da
\end{align*}
Assuming we found a correct PDF how to proceed with the expectation? More especially the bounds on the integration?
|
\begin{align*}
F_{W}(w) = P(W\le w) = 1- [(1-P(A< w) )(1-P(B< w ))]
= 1- \left[ \left( 1 - \frac{w}{2}\right) \left( 1 - \frac{w-1}{2}\right) \right]
= 1 - \left[ \left( \frac{2-u}{2}\right) \left(\frac{3-u}{2}\right) \right].
\end{align*}
Why does the notation $w$ change to $u$?
And the survival function should be
$$
1 - F_W(w) = P(A > w, B > w) =
\begin{cases}
\frac{2 - w}{2}, & 0 < w < 1\\
\frac{2-w}2\frac{3-w}{2}, & 1 \le w < 2 \\
\end{cases}
$$
For any non-negative random variable $Y$,
$$
EY = \int_0^\infty (1 - F_Y(y)) dy
$$
Supposing the survival function of $W$ I calculated is correct,
$$
EW = \int_0^1 \frac{2-w}{2} dw + \int_1^2 \frac{(2-w)(3-w)}{4} dw = \frac{23}{24}
$$
Note that $W + Z = A + B$, $EZ = EA + EB - EW = \frac{49}{24}$.
update:
The support of $W = \min(A, B)$ can be easily derived by noting that $W \le A$ and $W \le B$. (try thinking about the support of $Z$)
As for the PDF of $W$,
$$
1 - F_W(w) = P(W > w) = P(A > w) P(B > w)
$$
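A simulation check of these values, together with the covariance the question ultimately asks for; a sketch that uses $E[WZ]=E[AB]=(1)(2)$ from the independence argument in the question:
import numpy as np
rng = np.random.default_rng(0)
A = rng.uniform(0, 2, 10**6)
B = rng.uniform(1, 3, 10**6)
W, Z = np.minimum(A, B), np.maximum(A, B)
print(W.mean(), 23 / 24)                                                 # ~ 0.958
print(Z.mean(), 49 / 24)                                                 # ~ 2.042
print(np.mean(W * Z) - W.mean() * Z.mean(), 2 - (23 / 24) * (49 / 24))   # Cov(W,Z) ~ 25/576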
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/603510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the P(A|C) if we know B depends on A and C depends on B? Given a Bayesian network that looks like the following:
A->B->C
How do we compute P(A|C)? My initial guess would be:
P(A|C) = P(A|B) * P(B|C) + P(A|not B) * P(not B|C)
| I would prefer $\Pr(A|C) = \Pr(A|C,B) \Pr(B|C) + \Pr(A|C, \text{not } B) \Pr(\text{not } B|C)$ and the following counterexample shows why there is a difference.
Prob A B C
0.1 T T T
0.1 F T T
0.1 T F T
0.2 F F T
0.2 T T F
0.1 F T F
0.1 T F F
0.1 F F F
Then in your formulation $\Pr(A|C)=\frac{2}{5}$, $\Pr(A|B)=\frac{3}{5}$, $\Pr(B|C)=\frac{2}{5}$, $\Pr(A|\text{not } B)= \frac{2}{5}$, $\Pr(\text{not } B|C)= \frac{3}{5}$, but $\frac{2}{5} \not = \frac{3}{5} \times \frac{2}{5} + \frac{2}{5} \times \frac{3}{5}$.
In my formulation $\Pr(A|C,B) = \frac{1}{2}$ and $\Pr(A|C, \text{not } B)=\frac{1}{3}$ and we have the equality $\frac{2}{5} = \frac{1}{2} \times \frac{2}{5} + \frac{1}{3} \times \frac{3}{5}$.
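The counterexample is easy to verify mechanically; a sketch:
# joint probabilities from the table above, keyed by (A, B, C) with 1 = T, 0 = F
p = {(1,1,1): .1, (0,1,1): .1, (1,0,1): .1, (0,0,1): .2,
     (1,1,0): .2, (0,1,0): .1, (1,0,0): .1, (0,0,0): .1}
pr = lambda cond: sum(v for k, v in p.items() if cond(*k))
p_a_c = pr(lambda a, b, c: a and c) / pr(lambda a, b, c: c)
proposed = (pr(lambda a, b, c: a and b) / pr(lambda a, b, c: b)) * (pr(lambda a, b, c: b and c) / pr(lambda a, b, c: c)) \
         + (pr(lambda a, b, c: a and not b) / pr(lambda a, b, c: not b)) * (pr(lambda a, b, c: not b and c) / pr(lambda a, b, c: c))
corrected = (pr(lambda a, b, c: a and b and c) / pr(lambda a, b, c: b and c)) * (pr(lambda a, b, c: b and c) / pr(lambda a, b, c: c)) \
          + (pr(lambda a, b, c: a and not b and c) / pr(lambda a, b, c: not b and c)) * (pr(lambda a, b, c: not b and c) / pr(lambda a, b, c: c))
print(p_a_c, proposed, corrected)    # 0.4, 0.48, 0.4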
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/19024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Distribution of $XY$ if $X \sim$ Beta$(1,K-1)$ and $Y \sim$ chi-squared with $2K$ degrees Suppose that $X$ has the beta distribution Beta$(1,K-1)$ and $Y$ follows a chi-squared with $2K$ degrees. In addition, we assume that $X$ and $Y$ are independent.
What is the distribution of the product $Z=XY$?
Update
My attempt:
\begin{align}
f_Z &= \int_{y=-\infty}^{y=+\infty}\frac{1}{|y|}f_Y(y) f_X \left (\frac{z}{y} \right ) dy \\ &= \int_{0}^{+\infty} \frac{1}{B(1,K-1)2^K \Gamma(K)} \frac{1}{y} y^{K-1} e^{-y/2} (1-z/y)^{K-2} dy \\ &= \frac{1}{B(1,K-1)2^K \Gamma(K)}\int_{0}^{+\infty} e^{-y/2} (y-z)^{K-2} dy \\ &=\frac{1}{B(1,K-1)2^K \Gamma(K)} [-2^{K-1}e^{-z/2}\Gamma(K-1,\frac{y-z}{2})]_0^\infty \\ &= \frac{2^{K-1}}{B(1,K-1)2^K \Gamma(K)} e^{-z/2} \Gamma(K-1,-z/2)
\end{align}
Is it correct? if yes, how we call this distribution?
| After some valuable remarks, I was able to find the solution:
We have $f_X(x)=\frac{1}{B(1,K-1)} (1-x)^{K-2}$ and $f_Y(y)=\frac{1}{2^K \Gamma(K)} y^{K-1} e^{-y/2}$.
Also, we have $0\le x\le 1$. Thus, if $x=\frac{z}{y}$, we get $0 \le \frac{z}{y} \le 1$ which implies that $z\le y \le \infty$.
Hence:
\begin{align}
f_Z &= \int_{y=-\infty}^{y=+\infty}\frac{1}{|y|}f_Y(y) f_X \left (\frac{z}{y} \right ) dy \\ &= \int_{z}^{+\infty} \frac{1}{B(1,K-1)2^K \Gamma(K)} \frac{1}{y} y^{K-1} e^{-y/2} (1-z/y)^{K-2} dy \\ &= \frac{1}{B(1,K-1)2^K \Gamma(K)}\int_{z}^{+\infty} e^{-y/2} (y-z)^{K-2} dy \\ &=\frac{1}{B(1,K-1)2^K \Gamma(K)} \left[-2^{K-1}e^{-z/2}\Gamma(K-1,\frac{y-z}{2})\right]_z^\infty \\ &= \frac{2^{K-1}}{B(1,K-1)2^K \Gamma(K)} e^{-z/2} \Gamma(K-1) \\ &= \frac{1}{2} e^{-z/2}
\end{align}
where the last equality holds since $B(1,K-1)=\frac{\Gamma(1)\Gamma(K-1)}{\Gamma(K)}$.
So $Z$ follows an exponential distribution of parameter $\frac{1}{2}$; or equivalently, $Z \sim\chi_2^2$.
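A quick simulation check of this result; a sketch:
import numpy as np
from scipy.stats import beta, chi2, kstest
rng = np.random.default_rng(0)
K, N = 6, 10**6
z = beta.rvs(1, K - 1, size=N, random_state=rng) * chi2.rvs(2 * K, size=N, random_state=rng)
print(kstest(z, chi2(2).cdf))    # KS statistic ~ 0: Z is chi-squared with 2 degrees of freedom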
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/183574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
} |
The expected long run proportion of time the chain spends at $a$ , given that it starts at $c$ Consider the transition matrix:
$\begin{bmatrix} \frac{1}{5} & \frac{4}{5} & 0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\
\frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} & \frac{1}{5} \\
0 & \frac{1}{3} & \frac{1}{3} & \frac{1}{3} & 0 \\
0 & 0 & 0 & 0 & 1 \end{bmatrix}$
What is the expected long run proportion of time the chain spends at
$a$, given that it starts at $b$.
I know that I must use the stationary distributions of each $\pi(j)$ in question. Since $a$ and $b$ only communicate with each other, I get the system of simulataneous equations:
*
*$\pi(a) = \frac{1}{2} \cdot \pi(b) + \frac{1}{5} \cdot \pi(a)$
*$\pi(b) = \frac{4}{5} \cdot \pi(a) + \frac{1}{2} \cdot \pi(b)$
with these I am getting a distribution $\pi = (\frac{5}{13}, \frac{8}{13})$, Is this correct?
if the distribution started at $c$, as in the title of the post, would my equations now be 3 simulatenous equations which look like:
*
*$\pi(a) = \frac{1}{5} \cdot \pi(a) + \frac{1}{2} \cdot \pi(b) + \frac{1}{5} \cdot \pi(c)$
*$\pi(b) = \frac{4}{5} \cdot \pi(a) + \frac{1}{2} \cdot \pi(b) + \frac{1}{5} \cdot \pi(c)$
*$\pi(c) = \frac{1}{5} \cdot \pi(c)$
I am uncertain about the last equation. What I am confused about is $c$ leads to every state, but if I include all of them then I will have a system of 6 equations. Since the question is asking specifically about $a$ which can only be reached by states $a,b,c$, shouldn't we only be considering the equations I wrote?
|
What is the expected long run proportion of time the chain spends at $a$, given that it starts at $b$?
This exercise, technically, asks for the limiting probability value $\ell_b(a)$. You can note that the limiting distribution $\ell_b= \left(\frac{5}{13}, \frac{8}{13}, 0, 0, 0\right)$ that you correctly evaluated is also a stationary distribution of the given matrix -- a limiting distribution will always be stationary. That matrix although is a reducible one and so it can have more than one stationary distribution. In fact a second one is $(0, 0, 0, 0, 1)$.
Now, the question that you made in the title is about the limiting distribution $\ell_c$, and, of course, specifically about its first value $\ell_c(a)$:
$$\ell_c(a) = \lim_{n \to \infty}P(X_n=a | X_0= c) = \frac{5}{7}\frac{5}{13} = \frac{25}{91}$$
If I didn't get it wrong this is a self-study question, so I will leave to you to find the middle steps of this solution. Consider that $\ell_c$ has non-zero values only for $a$, $b$ and $e$, it is, in fact, a weighted sum of the two stationary distributions above (and therefore, it is a stationary distribution as well).
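A quick numerical check by iterating the transition matrix (states ordered $a,b,c,d,e$); a sketch:
import numpy as np
P = np.array([[1/5, 4/5, 0,   0,   0  ],
              [1/2, 1/2, 0,   0,   0  ],
              [1/5, 1/5, 1/5, 1/5, 1/5],
              [0,   1/3, 1/3, 1/3, 0  ],
              [0,   0,   0,   0,   1  ]])
Pn = np.linalg.matrix_power(P, 200)
print(Pn[1, 0], 5 / 13)     # started at b: long-run proportion of time at a
print(Pn[2, 0], 25 / 91)    # started at c: (5/7) * (5/13)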
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/262912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Full-Rank design matrix from overdetermined linear model I'm trying to create a full-rank design matrix X for a randomized block design model starting from something like the example from page 3/8 of this paper (Wayback Machine) .
It's been suggested that I can go about this by eliminating one of each of the columns for treatment (column 5) and block (column 8) as shown in the example on the page marked 45 in this paper.
I'm not sure how this results in a full-rank design matrix though, for a 12x8 matrix wouldn't making it 12x6 mean that by definition there are still linear dependencies?
My understanding of the definition of a full-rank matrix would be that there are no linear dependencies, which would require a matrix to be square among other criteria. Perhaps this is a misconception on my part?
The goal is to end up with an invertible matrix from $X^T X$ which I can use to determine the least squares, I'm not sure if that's relevant to the issue.
| The matrix $\mathbf{X}^\text{T} \mathbf{X}$ is called the Gramian matrix of the design matrix $\mathbf{X}$. It is invertible if and only if the columns of the design matrix are linearly independent ---i.e., if and only if the design matrix has full rank (see e.g., here and here). (So yes, these two things are closely related, as you suspected.) Before considering why removing columns gives you full rank, it is useful to see why the present design matrix is not of full rank. In its current form, you have the linear dependencies:
$$\begin{equation} \begin{aligned}
\text{col}_5(\mathbf{X}) &= \text{col}_1(\mathbf{X}) - \text{col}_2(\mathbf{X}) - \text{col}_3(\mathbf{X}) - \text{col}_4(\mathbf{X}), \\[6pt]
\text{col}_8(\mathbf{X}) &= \text{col}_1(\mathbf{X}) - \text{col}_6(\mathbf{X}) - \text{col}_7(\mathbf{X}). \\[6pt]
\end{aligned} \end{equation}$$
If you remove columns $5$ and $8$ you remove these linear dependencies, and it turns out that there are no linear dependencies remaining. (If you're not sure, just try to find one.) To confirm this we can look at the reduced design matrix and its Gramian matrix:
$$\mathbf{X}_- = \begin{bmatrix}
1 & 1 & 0 & 0 & 1 & 0 \\
1 & 1 & 0 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 0 \\
1 & 0 & 1 & 0 & 1 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 \\
1 & 0 & 1 & 0 & 0 & 0 \\
1 & 0 & 0 & 1 & 1 & 0 \\
1 & 0 & 0 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 0 & 0 \\
1 & 0 & 0 & 0 & 1 & 0 \\
1 & 0 & 0 & 0 & 0 & 1 \\
1 & 0 & 0 & 0 & 0 & 0 \\
\end{bmatrix} \quad \quad \quad
\mathbf{X}_-^\text{T} \mathbf{X}_- = \begin{bmatrix}
12 & 3 & 3 & 3 & 4 & 4 \\
3 & 3 & 0 & 0 & 1 & 1 \\
3 & 0 & 3 & 0 & 1 & 1 \\
3 & 0 & 0 & 3 & 1 & 1 \\
4 & 1 & 1 & 1 & 4 & 0 \\
4 & 1 & 1 & 1 & 0 & 4 \\
\end{bmatrix}.$$
The Gram determinant of this reduced design matrix is $\det (\mathbf{X}_-^\text{T} \mathbf{X}_-) = 432 \neq 0$, so the reduced design matrix has linearly independent columns and is of full rank. The Gramian matrix for the reduced design matrix is invertible, with inverse:
$$(\mathbf{X}_-^\text{T} \mathbf{X}_-)^{-1} = \begin{bmatrix}
\tfrac{1}{2} & -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{1}{3} & -\tfrac{1}{4} & -\tfrac{1}{4} \\
-\tfrac{1}{3} & \tfrac{2}{3} & \tfrac{1}{3} & \tfrac{1}{3} & 0 & 0 \\
-\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{2}{3} & \tfrac{1}{3} & 0 & 0 \\
-\tfrac{1}{3} & \tfrac{1}{3} & \tfrac{1}{3} & \tfrac{2}{3} & 0 & 0 \\
-\tfrac{1}{4} & 0 & 0 & 0 & \tfrac{1}{2} & \tfrac{1}{4} \\
-\tfrac{1}{4} & 0 & 0 & 0 & \tfrac{1}{4} & \tfrac{1}{2} \\
\end{bmatrix}.$$
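These claims are easy to verify numerically; here is a small R sketch (not part of the original answer):
X <- cbind(1,
           rep(c(1, 0, 0, 0), each = 3),   # treatment dummies (column 5 dropped)
           rep(c(0, 1, 0, 0), each = 3),
           rep(c(0, 0, 1, 0), each = 3),
           rep(c(1, 0, 0), times = 4),     # block dummies (column 8 dropped)
           rep(c(0, 1, 0), times = 4))
G <- crossprod(X)       # the Gramian X^T X shown above
qr(X)$rank              # 6, so the reduced design matrix has full column rank
det(G)                  # 432
round(solve(G), 4)      # matches the displayed inverse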
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/314022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to show this matrix is positive semidefinite? Let
$$K=\begin{pmatrix}
K_{11} & K_{12}\\
K_{21} & K_{22}
\end{pmatrix}$$
be a symmetric positive semidefinite real matrix (PSD) with $K_{12}=K_{21}^T$. Then, for $|r| \le 1$,
$$K^*=\begin{pmatrix}
K_{11} & rK_{12}\\
rK_{21} & K_{22}
\end{pmatrix}$$
is also a PSD matrix. Matrices $K$ and $K^*$ are $2 \times 2$ and $K_{21}^T$ denotes the transpose matrix. How do I prove this?
| There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple theorems.
*
*For any $A$ - PSD and any $Q$ we have $Q^TAQ$ - PSD
*For $A$ - PSD and $B$ - PSD also $A + B$ - PSD
*For $A$ - PSD and $q \ge 0$ also $qA$ - PSD
And now:
\begin{align*}
K^* &=
\begin{pmatrix}
K_{1,1} & rK_{1,2} \\
rK_{2,1} & K_{2,2} \\
\end{pmatrix} \\
&=
\begin{pmatrix}
K_{1,1} & rK_{1,2} \\
rK_{2,1} & r^2K_{2,2} \\
\end{pmatrix}
+
\begin{pmatrix}
0 & 0 \\
0 & qK_{2,2} \\
\end{pmatrix}, \text{ where $q = 1 - r^2 \ge 0$} \\
&=
\begin{pmatrix}
I & 0 \\
0 & rI \\
\end{pmatrix}^T
\begin{pmatrix}
K_{1,1} & K_{1,2} \\
K_{2,1} & K_{2,2} \\
\end{pmatrix}
\begin{pmatrix}
I & 0 \\
0 & rI \\
\end{pmatrix}
+
q\begin{pmatrix}
0 & 0 \\
0 & K_{2,2} \\
\end{pmatrix}
\end{align*}
Matrix $K$ is PSD by definition and so is its submatrix $K_{2, 2}$
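A quick numerical illustration of this decomposition (a sketch; the block size and the value of $r$ are arbitrary choices):
set.seed(1)
n <- 3                                             # size of each block
K <- crossprod(matrix(rnorm((2 * n)^2), 2 * n))    # a random PSD matrix
r <- -0.7                                          # any |r| <= 1
Q <- diag(c(rep(1, n), rep(r, n)))                 # Q = diag(I, rI)
E22 <- matrix(0, 2 * n, 2 * n)
E22[(n + 1):(2 * n), (n + 1):(2 * n)] <- K[(n + 1):(2 * n), (n + 1):(2 * n)]
Kstar <- t(Q) %*% K %*% Q + (1 - r^2) * E22        # the two PSD pieces from the proof
range(Kstar[1:n, (n + 1):(2 * n)] - r * K[1:n, (n + 1):(2 * n)])  # ~ 0: off-diagonal block is rK12
min(eigen(Kstar, symmetric = TRUE)$values)         # >= 0 (up to rounding), so K* is PSD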
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/322207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Variance of X+Y+XY? Assuming that random variables X and Y are independent, what is $\displaystyle Var((1+X)(1+Y)-1)=Var(X+Y+XY)$?
Should I start as follows
\begin{equation}
Var((1+X)(1+Y)-1)\\
=Var((1+X)(1+Y))\\
=(E[(1+X)])^2 Var(1+Y)+(E[(1+Y)])^2 Var(1+X)+Var(1+X)Var(1+Y)
\end{equation}
or maybe as follows
\begin{equation}
\\
Var((1+X)(1+Y)-1)\\
=Var(1+Y+X+XY-1)\\
=Var(X+Y+XY)\\
=Var(X)+Var(Y)+Var(XY)+2Cov(X,Y)+2Cov(X,XY)+2Cov(Y,XY)
\end{equation}
I'm wondering whether I could express the problem in terms of covariances (and variances) between individual random variables. I would like to forecast the variance from the individual covariances in my model if it's possible. Does the solution simplify if the expected values of the variables are zero?
Edit:
Moving on from the first alternative
\begin{equation}
=(E[(1+X)])^2 Var(1+Y)+(E[(1+Y)])^2 Var(1+X)+Var(1+X)Var(1+Y)\\
=(E[(1+X)])^2 Var(Y)+(E[(1+Y)])^2 Var(X)+Var(X)Var(Y)\\
=(1+E[X])^2 Var(Y)+(1+E[Y])^2 Var(X)+Var(X)Var(Y)\\
\text{ }\\
\text{if E[X] = 0 and E[Y] = 0, then }\\
=Var(Y) + Var(X) + Var(X)Var(Y)\\
\text{ }\\
\end{equation}
| For independent random variables $X$ and $Y$ with means $\mu_X$ and $\mu_Y$ respectively, and variances $\sigma_X^2$ and $\sigma_Y^2$ respectively,
\begin{align}\require{cancel}
\operatorname{var}(X+Y+XY) &= \operatorname{var}(X)+\operatorname{var}(Y)+\operatorname{var}(XY)\\
&\quad +2\cancelto{0}{\operatorname{cov}(X,Y)}+2\operatorname{cov}(X,XY)+2\operatorname{cov}(Y,XY)\\
&=\sigma_X^2+\sigma_Y^2+\big(\sigma_X^2\sigma_Y^2+\sigma_X^2\mu_Y^2+\sigma_Y^2\mu_X^2\big)\\
&\quad +2\operatorname{cov}(X,XY)+2\operatorname{cov}(Y,XY).
\end{align}
Now,
\begin{align}
\operatorname{cov}(X,XY) &= E[X\cdot XY] - E[X]E[XY]\\
&=E[X^2Y]-E[X]\big(E[X]E[Y]\big)\\
&= E[X^2]E[Y]-\big(E[X]\big)^2E[Y]\\
&= \sigma_X^2\mu_Y
\end{align}
and similarly, $\operatorname{cov}(Y,XY) = \sigma_Y^2 \mu_X$.
Consequently,
\begin{align}\operatorname{var}(X+Y+XY) &=\sigma_X^2+\sigma_Y^2+\sigma_X^2\sigma_Y^2+\sigma_X^2\mu_Y^2+\sigma_Y^2\mu_X^2 +2\sigma_X^2\mu_Y + 2\sigma_Y^2 \mu_X\\
&= \sigma_X^2\big(1 + \mu_Y^2 + 2\mu_Y\big) + \sigma_Y^2\big(1 + \mu_X^2 + 2\mu_X\big) + \sigma_X^2\sigma_Y^2\\
&= \sigma_X^2\big(1 + \mu_Y\big)^2 + \sigma_Y^2\big(1 + \mu_X\big)^2 + \sigma_X^2\sigma_Y^2.
\end{align}
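A short Monte Carlo check of this formula (a sketch; the means and standard deviations are arbitrary choices):
set.seed(42)
muX <- 0.5; muY <- -1.2; sdX <- 1.3; sdY <- 0.7
X <- rnorm(1e6, muX, sdX); Y <- rnorm(1e6, muY, sdY)        # independent
var(X + Y + X * Y)                                          # simulated
sdX^2 * (1 + muY)^2 + sdY^2 * (1 + muX)^2 + sdX^2 * sdY^2   # formula above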
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/323905",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A Pairwise Coupon Collector Problem This is a modified version of the coupon collector problem, where we are interested in making comparisons between the "coupons". There are a number of constraints which have been placed in order to make this applicable to the application of interest (not relevant here, but related to clustering).
*
*There exists $M$ unique coupons.
*Coupons come in packets of size $K$.
*Each packet contains $K$ unique units, sampled uniformly without replacement from the total set of $M$ units.
*The contents of a packet is independent of all other packets.
*All units in a packet are "compared" to all other units in the packet.
*Units may not be compared across packets.
Question 1. Let $X$ be the number of unique comparisons that have been made after $T$ packets have been acquired. What is the expected value and variance of $X$?
Question 2: Let $T_\star$ be the smallest number of packets required to make all of the $\binom{M}{2}$ comparisons. What is the expected value and variance of $T_\star$?
| A Solution for Question 1
\begin{align*}
E(X) &= \binom{M}{2}\left(1 - (1-p)^T\right) \\
V(X) &= \binom{M}{2}(1-p)^T\left(1 -\binom{M}{2}(1-p)^T\right) + 6\binom{M}{3}(1-q)^T + 6\binom{M}{4}(1-r)^T
\end{align*}
where
\begin{align*}
p &= \frac{K(K-1)}{M(M-1)} \\
q &= \frac{2\binom{M-2}{K-2} - \binom{M-3}{K-3}}{\binom{M}{K}} \\
r &= \frac{2\binom{M-2}{K-2} - \binom{M-4}{K-4}}{\binom{M}{K}}
\end{align*}
The following plot shows $E(X)$ and $E(X) - 2\sqrt{V(X)}$ as a function of $T$ for the case where $M=50$ and $K=10$.
Derivation of Results
Notation and Preliminaries
Let $A_{ij,t}$ be the event that units $i$ and $j$ are compared in packet $t$. It is easy to see that $P(A_{ij, t}) = \frac{K(K-1)}{M(M-1)}$. Now let $B_{ij} = \cap_{t=1}^TA^c_{ij,t}$ be the event that units $i$ and $j$ are not compared in any packet.
\begin{align*}
P(B_{ij}) &= P\left(\bigcap_{t=1}^TA^c_{ij,t}\right) \\
&= P(A^c_{ij,t})^T \\
&= \left(1 - \frac{K(K-1)}{M(M-1)}\right)^T
\end{align*}
Finally, let $X_{ij}$ be an indicator variable which equals $1$ when $B_{ij}^c$ holds and $0$ otherwise. Note that $$X_{ij} \sim \text{Bern}\left(1 -\left(1 - \frac{K(K-1)}{M(M-1)}\right)^T\right).$$ Then the total number of comparisons made can be denoted
$$X = \sum_{i < j}X_{ij}$$
Expected Value of $X$
By linearity of expectation we have
$$E(X) = E\left(\sum_{i < j}X_{ij}\right) = \sum_{i < j}E\left(X_{ij}\right) = \binom{M}{2}\left(1 -\left(1 - \frac{K(K-1)}{M(M-1)}\right)^T\right)$$
Variance of $X$
The variance is slightly tricker. We begin by finding $E(X^2)$. First note that
\begin{align*}
X^2 &= \left(\sum_{i < j}X_{ij}\right)^2 \\
&= \underbrace{\sum X_{ij}^2}_{\binom{M}{2} \text{ terms}} + \underbrace{\sum X_{ij}X_{kl}}_{\binom{M}{4}\binom{4}{2} \text{ terms}} + \underbrace{\sum X_{ij}X_{ik} + \sum X_{ij}X_{jk} + \sum X_{ik}X_{jk}}_{\binom{M}{3}\cdot 3 \cdot 2 \text{ terms}}
\end{align*}
We group the sum in this way so that the terms in each group have the same expected value. Note that the total number of terms is $\binom{M}{2} + 6\binom{M}{3} + 6\binom{M}{4} = \binom{M}{2}^2$, as expected. We will now look at each of the three cases individually.
Case One: $E(X_{ij}^2)$
Since $X_{ij}$ is binary, we have that $X_{ij}^2 = X_{ij}$, thus
$$E\left(\sum X_{ij}^2\right) = E(X) = \binom{M}{2}\left(1 -\left(1 - \frac{K(K-1)}{M(M-1)}\right)^T\right)$$
Case Two: $E\left(X_{ij}X_{kl}\right)$
Using standard facts about products of Bernoulli random variables, we have $E\left(X_{ij}X_{kl}\right) = P(X_{ij} = 1, X_{kl} = 1)$ where $i$, $j$, $k$ and $l$ are all distinct units. There are $\binom{M}{4}$ ways to choose these distinct units, and then $\binom{4}{2} = 6$ ways to choose a valid index assignment.
The event $\{X_{ij} = 1, X_{kl} = 1\}$ is equivalent to the event $B_{ij}^c \cap B_{kl}^c$.
\begin{align*}
E(X_{ij}X_{kl}) &= P(B_{ij}^c \cap B_{kl}^c) \\
&= 1 - P(B_{ij} \cup B_{kl}) && \text{DeMorgans Law} \\
&= 1 - \left[P(B_{ij}) + P(B_{kl}) - P(B_{ij} \cap B_{kl})\right] \\
&= 1 - \left[2\left(1-\frac{K(K-1)}{M(M-1)}\right)^T - P\left(\bigcap_{t=1}^T\{A^c_{ij,t}\cap A^c_{kl,t}\}\right)\right] \\
&= 1 - 2\left(1-\frac{K(K-1)}{M(M-1)}\right)^T + P(A^c_{ij,1} \cap A^c_{kl, 1})^T && \text{independence across packets} \\
&= 1 - 2\left(1-\frac{K(K-1)}{M(M-1)}\right)^T + \left(1 - \frac{2\binom{M-2}{K-2} - \binom{M-4}{K-4}}{\binom{M}{K}} \right)^T
\end{align*}
Case Three: $E\left(X_{ij}X_{ik}\right)$
This case is very similar to the previous case. There are $6\binom{M}{3}$ terms with this expected value because there are $\binom{M}{3}$ ways to choose three distinct units, $3$ ways to choose which unit is shared between both indicators and $2$ ways to assign the remaining indices. The probability calculation proceeds in the exact same way as case two, up until the last line which becomes:
$$E(X_{ij}X_{ik}) = 1 - 2\left(1-\frac{K(K-1)}{M(M-1)}\right)^T + \left(1 - \frac{2\binom{M-2}{K-2} - \binom{M-3}{K-3}}{\binom{M}{K}} \right)^T$$
Putting Everything Together
To simplify notation, lets define
\begin{align*}
p &= \frac{K(K-1)}{M(M-1)} \\
q &= \frac{2\binom{M-2}{K-2} - \binom{M-3}{K-3}}{\binom{M}{K}} \\
r &= \frac{2\binom{M-2}{K-2} - \binom{M-4}{K-4}}{\binom{M}{K}}
\end{align*}
Then we have
\begin{align*}
E(X^2) &= \binom{M}{2}\left(1 -(1 -p)^T\right) + \\
&\quad\quad 6\binom{M}{3}\left(1 - 2(1-p)^T + (1-q)^T\right) + \\
&\quad\quad 6\binom{M}{4}\left(1 - 2(1-p)^T + (1-r)^T\right) \\
&= \binom{M}{2}^2 - \left(\binom{M}{2} + 12\binom{M}{3} + 12\binom{M}{4}\right)(1-p)^T + \\
&\quad\quad 6\binom{M}{3}(1-q)^T + 6\binom{M}{4}(1-r)^T
\end{align*}
And the variance can be calculated using the identity $\text{Var}(X) = E(X^2) - E(X)^2$ which gives (after simplification)
$$\text{Var}(X) = \binom{M}{2}(1-p)^T\left(1 -\binom{M}{2}(1-p)^T\right) + 6\binom{M}{3}(1-q)^T + 6\binom{M}{4}(1-r)^T $$
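These formulas are easy to check by simulation; here is a small R sketch (the parameter values are arbitrary and smaller than the plotted example, just to keep it fast):
set.seed(1)
M <- 20; K <- 5; T <- 10; nsim <- 5000
count_pairs <- function(M, K, T) {
  seen <- matrix(FALSE, M, M)
  for (t in seq_len(T)) {
    pkt <- sort(sample.int(M, K))          # one packet of K distinct units
    seen[t(combn(pkt, 2))] <- TRUE         # mark every pair (i < j) in the packet
  }
  sum(seen)                                # unique comparisons after T packets
}
x <- replicate(nsim, count_pairs(M, K, T))
p <- K * (K - 1) / (M * (M - 1))
q <- (2 * choose(M - 2, K - 2) - choose(M - 3, K - 3)) / choose(M, K)
r <- (2 * choose(M - 2, K - 2) - choose(M - 4, K - 4)) / choose(M, K)
EX <- choose(M, 2) * (1 - (1 - p)^T)
VX <- choose(M, 2) * (1 - p)^T * (1 - choose(M, 2) * (1 - p)^T) +
  6 * choose(M, 3) * (1 - q)^T + 6 * choose(M, 4) * (1 - r)^T
c(simulated = mean(x), theory = EX)
c(simulated = var(x),  theory = VX)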
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/541014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculation of an "unconstrained" normal distribution (starting from a censored one) Assume that two r.v. $W$ and $Y|W=w$ with
(1) $W \sim \text{N}(\mu_w,\sigma_w^2)$ (iid)
(2) $Y|W=w \sim \text{N}(w,\sigma_y^2)$ (iid)
Further we only observe $Y$ if $Y$ is less then $W$, i.e.,
(3) $Y|Y\le W$
Goal: Find the pdf of the censored observations, i.e., of $Y|Y\le W$ and from that deduce the uncensored pdf and the first two moments (so i.m.h.o. we have to find$f_Y(y)$). The first two moments of this uncensored pdf are supposed to depend upon $E(Y|Y\le W)$ and $Var(Y|Y\le W)$.
By definition of conditional pdf we have that:
(4) $f_{Y|W}(y|W = w)= \frac{f_{Y,W}(y,w)}{f_W(w)}$
Next, the definition of a truncated density gives for a abitrary value of $W$:
(5) $ f_{Y|Y\le W}(y|y\le w) = \frac{f_Y(y)}{P(Y\le W)}$
I would simply rewrite (4) to
$f_{Y|W}(y|W = w)f_W(w) = f_{Y,W}(y,w)$
then integration over $f_{Y,W}(y,w)$ w.r.t $w$ should yield $f_Y(y)$, i.e.,
(a) $\int_{-\infty}^{\infty} f_{Y,W}(y,w) dw = \int_{-\infty}^{\infty} f_Y(y|W = w)f_W(w) dw = f_Y(y)$
Plugin in $f_Y(y)$ into (5), ($P(Y\le W)$ will also be given by $f_Y(y)$) I will se how the moments of $f_{Y|Y\le W}(y|y\le w)$ will look and how the moments of $f_Y(y)$ depend upon them.
So (a) will look like
$f_Y(y) = \int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi\sigma^2_y}}\text{exp}\big(-\frac{(y-w)^2}{2\sigma_y^2}\big)\frac{1}{\sqrt{2\pi\sigma^2_w}}\text{exp}\big(-\frac{(w-\mu_w)^2}{2\sigma_w^2}\big)dw$
Except for the $w$ in the first $\text{exp}$, this looks very easy, but since there is a $w$ I'm a little bit stuck on how to solve this...
| Ok. Let's do this, for CV's sake.
First compact by setting $C=\frac{1}{\sqrt{2\pi\sigma^2_y}}\frac{1}{\sqrt{2\pi\sigma^2_w}} = \frac{1}{2\pi\sigma_y\sigma_w}$, so
$$f_Y(y) =C \int_{-\infty}^{\infty}\exp\left\{-\frac{(y-w)^2}{2\sigma_y^2}\right\}\exp\left\{-\frac{(w-\mu_w)^2}{2\sigma_w^2}\right\}dw$$
We have
$$\exp\left\{-\frac{(y-w)^2}{2\sigma_y^2}\right\}\exp\left\{-\frac{(w-\mu_w)^2}{2\sigma_w^2}\right\} =
\exp\left\{-\frac{y^2-2yw+w^2}{2\sigma_y^2}\right\}\exp\left\{-\frac{w^2-2w\mu_w+\mu_w^2}{2\sigma_w^2}\right\}
=\exp\left\{-\frac{y^2}{2\sigma_y^2}-\frac{\mu_w^2}{2\sigma_w^2}\right\} \exp\left\{-\frac{w^2}{2\sigma_y^2}-\frac{w^2}{2\sigma_w^2}\right\}\exp\left\{\frac{2yw}{2\sigma_y^2}+\frac{2w\mu_w}{2\sigma_w^2}\right\}$$
Setting $s^2\equiv \sigma_y^2+\sigma_w^2$ we arrive at
$$=\exp\left\{-\frac{y^2}{2\sigma_y^2}-\frac{\mu_w^2}{2\sigma_w^2}\right\} \exp\left\{-\frac{s^2}{2\sigma_y^2\sigma_w^2}w^2\right\}\exp\left\{\frac{\sigma_w^2y+\sigma_y^2\mu_w}{\sigma_y^2\sigma_w^2}w\right\}$$
Include the first $\exp$ in the constant, $C^*=C \exp\left\{-\frac{y^2}{2\sigma_y^2}-\frac{\mu_w^2}{2\sigma_w^2}\right\}$.
Set
$$\beta\equiv \frac{s^2}{2\sigma_y^2\sigma_w^2},\qquad \alpha\equiv \frac{\sigma_w^2y+\sigma_y^2\mu_w}{\sigma_y^2\sigma_w^2}$$ to obtain
$$f_Y(y) =C^* \int_{-\infty}^{\infty}e^{-\beta w^2+\alpha w}dw=C^*\left[ \int_{-\infty}^{0}e^{-\beta w^2+\alpha w}dw + \int_{0}^{\infty}e^{-\beta w^2+\alpha w}dw\right]$$
$$=C^* \int_{0}^{\infty}e^{-\beta w^2}\left[e^{-\alpha w}+e^{\alpha w}\right]dw =2C^* \int_{0}^{\infty}e^{-\beta w^2}\operatorname{cosh}(\alpha w)dw$$
where $\operatorname{cosh}$ is the hyperbolic cosine.
Using a formula provided in Gradshteyn & Ryzhik (2007), "Table of Integrals, Series and Products", 7th ed., p. 384, eq. 3.546(2) we have
$$f_Y(y)=2C^*\frac 12 \sqrt {\frac {\pi}{\beta}} \exp\left\{\frac {\alpha^2}{4\beta}\right\}$$
Now $$\frac {\alpha^2}{4\beta} = \frac {\left(\frac{\sigma_w^2y+\sigma_y^2\mu_w}{\sigma_y^2\sigma_w^2}\right)^2}{4\frac{s^2}{2\sigma_y^2\sigma_w^2}} = \frac {(\sigma_w^2y+\sigma_y^2\mu_w)^2}{2\sigma_y^2\sigma_w^2s^2}$$
and bringing back in $C^*$ (and $\beta$) in all its glory we have
$$f_Y(y)=\frac{1}{2\pi\sigma_y\sigma_w}\exp\left\{-\frac{y^2}{2\sigma_y^2}-\frac{\mu_w^2}{2\sigma_w^2}\right\}\sqrt{\pi} \left(\sqrt {\frac{s^2}{2\sigma_y^2\sigma_w^2}}\right)^{-1} \exp\left\{\frac {(\sigma_w^2y+\sigma_y^2\mu_w)^2}{2\sigma_y^2\sigma_w^2s^2}\right\} $$
The constant terms simplify to
$$\frac{1}{2\pi\sigma_y\sigma_w}\sqrt{\pi} \left(\sqrt {\frac{s^2}{2\sigma_y^2\sigma_w^2}}\right)^{-1} = \frac{1}{s\sqrt{2\pi}} $$
and, the exponentials end up in the normal exponential. So in the end
$$f_Y(y) = \frac{1}{s\sqrt{2\pi}}\exp\left\{-\frac{(y-\mu_w)^2}{2s^2}\right\}= N(\mu_w, s^2),\qquad s^2\equiv \sigma_y^2+\sigma_w^2$$
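A quick simulation check of this marginal (a sketch; the parameter values are arbitrary):
set.seed(2)
mu_w <- 1.5; s_w <- 0.8; s_y <- 1.2
w <- rnorm(1e6, mu_w, s_w)
y <- rnorm(1e6, mean = w, sd = s_y)     # Y | W = w ~ N(w, s_y^2)
c(mean(y), sd(y))                       # simulated
c(mu_w, sqrt(s_y^2 + s_w^2))            # N(mu_w, s^2) as derived above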
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/73157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that the value is, indeed, the MLE Let $ X_1, ... X_n$ i.i.d with pdf
$$f(x;\theta)=\frac{x+1}{\theta(\theta+1)}\exp(-x/\theta), x>0, \theta >0$$
It is asked to find the MLE estimator for $\theta.$
The likelihood function is given by
$$L(\theta;x)=[\theta(\theta+1)]^{-n}\exp\left(-\frac{\sum_i x_i}{\theta}\right)\prod_i (x_i+1)I_{(0,\infty)}(x_i)$$
Then, the derivative of log-likelihood will be
$$\frac{d\log L(\theta;x)}{d\theta}=\frac{-(2\theta+1)n}{\theta(1+\theta)} + \frac{\sum X_i}{\theta^2}$$
I've obtained my candidate to MLE. But, doing the second derivative of the log-likelihood, I could not conclude that it is negative so that the candidate is, indeed, the point of maximum. What should I do, in this case?
| Removing the multiplicative constants that do not depend on $\theta$, the likelihood function in this case is:
$$\begin{equation} \begin{aligned}
L_\mathbf{x}(\theta)
&= \prod_{i=1}^n \frac{1}{\theta (\theta+1)} \cdot \exp \Big( - \frac{x_i}{\theta} \Big) \\[6pt]
&= \frac{1}{\theta^n (\theta+1)^n} \cdot \exp \Big( - \frac{n \bar{x}}{\theta} \Big). \\[6pt]
\end{aligned} \end{equation}$$
Thus, the log-likelihood function is:
$$\ell_\mathbf{x}(\theta) \equiv \ln L_\mathbf{x}(\theta) = -n \Bigg[ \ln(\theta) + \ln(\theta+1) + \frac{\bar{x}}{\theta} \Bigg] \quad \quad \quad \text{for all } \theta >0.$$
I will write the derivatives of this function out in a form that is useful for finding the critical points and then finding the second derivative at those critical points:
$$\begin{equation} \begin{aligned}
\frac{d \ell_\mathbf{x}}{d \theta}(\theta)
&= -n \Bigg[ \frac{1}{\theta} + \frac{1}{\theta+1} - \frac{\bar{x}}{\theta^2} \Bigg] \\[6pt]
&= -n \Bigg[ \frac{2\theta + 1}{\theta(\theta+1)} - \frac{\bar{x}}{\theta^2} \Bigg] \\[6pt]
&= - \frac{n}{\theta^2} \Bigg[ \frac{(2\theta + 1)\theta}{\theta+1} - \bar{x} \Bigg], \\[6pt]
\frac{d^2 \ell_\mathbf{x}}{d \theta^2}(\theta)
&= -n \Bigg[ - \frac{1}{\theta^2} - \frac{1}{(\theta+1)^2} + \frac{2\bar{x}}{\theta^3} \Bigg] \\[6pt]
&= -n \Bigg[ - \frac{(\theta+1)^2 + \theta^2}{\theta^2 (\theta+1)^2} + \frac{2\bar{x}}{\theta^3} \Bigg] \\[6pt]
&= -n \Bigg[ - \frac{2 \theta^2 + 2 \theta + 1}{\theta^2 (\theta+1)^2} + \frac{2\bar{x}}{\theta^3} \Bigg] \\[6pt]
&= n \Bigg[ \frac{(2\theta+1)(\theta+1) - \theta}{\theta^2 (\theta+1)^2} - \frac{2\bar{x}}{\theta^3} \Bigg] \\[6pt]
&= \frac{n}{\theta^3} \Bigg[ \frac{[(2\theta+1)(\theta+1) - \theta]\theta}{(\theta+1)^2} - 2\bar{x} \Bigg] \\[6pt]
&= \frac{n}{\theta^3} \Bigg[ \frac{(2\theta+1) \theta}{\theta+1} - \bar{x} - \frac{\theta^2}{(\theta+1)^2} - \bar{x} \Bigg] \\[6pt]
&= - \frac{1}{\theta} \frac{d \ell_\mathbf{x}}{d \theta}(\theta) - \frac{n}{\theta^3} \Bigg( \frac{\theta^2}{(\theta+1)^2} + \bar{x} \Bigg). \\[6pt]
\end{aligned} \end{equation}$$
From this form, we see that at any critical point we must have:
$$\begin{equation} \begin{aligned}
\frac{d^2 \ell_\mathbf{x}}{d \theta^2}(\theta)
&= - \frac{n}{\theta^3} \Bigg( \frac{\theta^2}{(\theta+1)^2} + \bar{x} \Bigg) < 0. \\[6pt]
\end{aligned} \end{equation}$$
Since every critical point is a local maximum, this means that there is a unique critical point that is the MLE of the function. Thus, the MLE is obtained by solving the equation:
$$\bar{x} = \frac{(2 \hat{\theta} + 1) \hat{\theta}}{\hat{\theta} + 1}
\quad \quad \quad \implies \quad \quad \quad
2 \hat{\theta}^2 + \hat{\theta} (1-\bar{x}) - \bar{x} = 0.$$
This is a quadratic equation with explicit solution:
$$\hat{\theta} = \frac{1}{4} \Bigg[ \bar{x} - 1 + \sqrt{\bar{x}^2 + 6 \bar{x} + 1} \Bigg].$$
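As a quick numerical check (a sketch, with an arbitrary true value of $\theta$): one way to sample from this density is as a Gamma/exponential mixture, after which the closed-form MLE can be compared with a direct numerical maximisation of the log-likelihood.
set.seed(3)
theta <- 2; n <- 1e5
# f(x) = (x+1)exp(-x/theta)/(theta(theta+1)) is a mixture:
# with prob theta/(theta+1) a Gamma(shape 2, scale theta), else an Exp(rate 1/theta)
g <- rbinom(n, 1, theta / (theta + 1))
x <- ifelse(g == 1, rgamma(n, shape = 2, scale = theta), rexp(n, rate = 1 / theta))
xbar <- mean(x)
(theta_hat <- (xbar - 1 + sqrt(xbar^2 + 6 * xbar + 1)) / 4)    # closed form above
loglik <- function(t) -n * (log(t) + log(t + 1) + xbar / t)
optimize(loglik, c(0.01, 20), maximum = TRUE)$maximum          # numerical MLE, matches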
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/115962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Probability of two random variables being equal The question is as follows:
Let $X_1$ ~ Binomial(3,1/3) and $X_2$ ~ Binomial(4,1/2) be independent random variables. Compute P($X_1$ = $X_2$)
I'm not sure what it means to compute the probability of two random variables being equal.
| Let $Z=X_1-X_2$
$P(Z=z)=\sum_{x} P(X_1=x,X_2=x-z)$, where the sum runs over all $x$ for which both probabilities are positive. Since $X_1$ and $X_2$ are independent,
$P(Z=z)=\sum_{x} P(X_1=x)P(X_2=x-z)\\=\sum_{x} \binom{3}{x}(\frac{1}{3})^x(\frac{2}{3})^{3-x}\binom{4}{4-x+z}(\frac{1}{2})^{x-z}(\frac{1}{2})^{4-x+z}$
When $X_1=X_2$ we have $z=0$.
Therefore
$P(Z=0)=\sum_{x_1=x_2}^\infty \binom{3}{x}(\frac{1}{3})^x(\frac{2}{3})^{3-x}\binom{4}{4-x+0}(\frac{1}{2})^{x-0}(\frac{1}{2})^{4-x+0}\\=\binom{3}{0}(\frac{1}{3})^0(\frac{2}{3})^{3-0}\binom{4}{4-0+0}(\frac{1}{2})^{0-0}(\frac{1}{2})^{4-0+0} \quad (x_1=x_2=0)\\+\binom{3}{1}(\frac{1}{3})^1(\frac{2}{3})^{3-1}\binom{4}{4-1+0}(\frac{1}{2})^{1-0}(\frac{1}{2})^{4-1+0}\quad(x_1=x_2=1)\\+\binom{3}{2}(\frac{1}{3})^2(\frac{2}{3})^{3-2}\binom{4}{4-2+0}(\frac{1}{2})^{2-0}(\frac{1}{2})^{4-2+0}\quad(x_1=x_2=2)\\+\binom{3}{3}(\frac{1}{3})^3(\frac{2}{3})^{3-3}\binom{4}{4-3+0}(\frac{1}{2})^{3-0}(\frac{1}{2})^{4-3+0}\quad(x_1=x_2=3)$
Ok, you can add up them to get the probability.
And remember that for a binomial distribution the random variable is the total number of successes. In your case both $X_1$ and $X_2$ can take the values 0, 1, 2, 3 as the common number of successes. This answer is in fact the same as the answer by Daniel and the comment by ACE.
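In R the whole calculation is a one-liner (a quick check of the sum above):
sum(dbinom(0:3, 3, 1/3) * dbinom(0:3, 4, 1/2))   # P(X1 = X2) = 96/432 = 2/9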
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/182691",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
A GENERAL inequality for a bi-modal hypergeometric distribution Say $X$ has a hypergeometric distribution with parameters $m$, $n$ and $k$, with $k\leq n<\frac12m$.
I know that $X$ has a dual mode if and only if $d=\frac{(k+1)(n+1)}{m+2}$ is integer. In that case $P(X=d)=P(X=d-1)$ equals the maximum probability.
See my previous question. I got a great answer proving $P(X=d+1) > P(X=d-2)$. That got me wondering: can we make a more general statement? More specifically (for natural $c \leq d-2$):
$P(X=d+c) > P(X=d-1-c)$
This is true for $c = 1$, but also in many cases when $c \geq 2$. I have not found any counterexamples yet. Can this be proven? Or where to start?
| You can turn the answer from the other question into an inductive proof for this question*.
$$\tfrac{P(X=d+c+1)}{P(X=d+c)}-\tfrac{P(X=d-c-2)}{P(X=d-c-1)} = \tfrac{(k-d-c)(n-d-c)}{(d+1+c) (m-k-n+d+1+c)} -\tfrac{ (d-c-1) (m-k-n+d-c-1)}{(k-d+c+2)(n-d+c+2)} \\= \tfrac{(k-d-c)(n-d-c)(k-d+c+2)(n-d+c+2)-(d-c-1) (m-k-n+d-c-1)(d+1+c) (m-k-n+d+1+c)}{(d+1+c) (m-k-n+d+1+c)(k-d+c+2)(n-d+c+2)}$$
again the denominator is positive, and we only need to show that the numerator is positive.
We can do the same steps, substituting $d=(k+1)(n+1)/(m+2)$ gives for the numerator:
$$(c+1)^2(m-2k)(m-2n)$$
which is positive when both $k< \frac{1}{2}m$ and $n < \frac{1}{2}m$.
Some other interesting points
*
*For $c = 0$ you get the previous answer.
*For $c=-1$ you get $\frac{P(X=d)}{P(X=d-1)}-\frac{P(X=d-1)}{P(X=d)} = 0$, which is true by the assumption $P(X=d) = P(X=d-1)$.
*Also for $n=\frac{1}{2}m$ you get that the term $(m-2n)$ equals zero and you get the symmetry $P(X=d+c) = P(X=d-c-1)$
*If $\tfrac{P(X=d+c+1)}{P(X=d+c)}-\tfrac{P(X=d-c-2)}{P(X=d-c-1)}> 0$ and $$P(X=d+c) \geq P(X=d-1-c)$$ then $P(X=d+(c+1)) > P(X=d-1-(c+1))$
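A small numerical check of the claim (a sketch; $m$, $n$, $k$ are chosen so that $d$ is an integer, and note that R's dhyper uses a different parametrisation, so the post's $(m,n,k)$ maps to dhyper(x, n, m - n, k)):
m <- 22; n <- 8; k <- 7                     # here d = (k+1)(n+1)/(m+2) = 3
d <- (k + 1) * (n + 1) / (m + 2)
pmf <- function(x) dhyper(x, n, m - n, k)
pmf(d) - pmf(d - 1)                         # 0: the two modes are equal
cs <- 1:(d - 2)                             # admissible values of c (here just c = 1)
cbind(c = cs, lhs = pmf(d + cs), rhs = pmf(d - 1 - cs))   # lhs > rhs, as claimed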
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/459012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Distribution of the pooled variance in paired samples Suppose a bivariate normal populations with means $\mu_1$ and $\mu_2$ and equal variance $\sigma^2$ but having a correlation of $\rho$.
Taking a paired sample, it is possible to compute the pooled variance. If $S^2_1$ and $S^2_2$ are the sample variances of the first elements of the pairs and of the second elements of the pairs respectively, then denote by $S_p^2 = \frac{S^2_1+S^2_2}{2}$ the pooled variance (equivalent to the mean of the two variances, as the sample sizes are the same for the first and second elements).
My question is: how can we demonstrate that the distribution of $S_p^2 / \sigma^2 \approx \chi^2_\nu / \nu$, with degrees of freedom $\nu$ equal to $2(n-1)/(1+\rho^2)$?
If this result is well known, what reference provided the original demonstration?
| I'm not sure about a reference for this result, but it is possible to derive it relatively easily, so I hope that suffices. One way to approach this problem is to look at it as a problem involving a quadratic form taken on a normal random vector. The pooled sample variance can be expressed as a quadratic form of this kind, and these quadratic forms are generally approximated using the chi-squared distribution (with exact correspondence in some cases).
Derivation of the result: In order to show where your assumptions come into the derivation, I will do the first part of the derivation without assuming equal variances for the two groups. If we denote your vectors by $\mathbf{X} = (X_1,...,X_n)$ and $\mathbf{Y} = (Y_1,...,Y_n)$ then your stipulated problem gives the joint normal distribution:
$$\begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}
\sim \text{N} (\boldsymbol{\mu}, \mathbf{\Sigma} )
\quad \quad \quad
\boldsymbol{\mu} =
\begin{bmatrix} \mu_X \mathbf{1} \\ \mu_Y \mathbf{1} \end{bmatrix}
\quad \quad \quad
\mathbf{\Sigma} =
\begin{bmatrix} \sigma_X^2 \mathbf{I} & \rho \sigma_X \sigma_Y \mathbf{I} \\
\rho \sigma_X \sigma_Y \mathbf{I} & \sigma_Y^2 \mathbf{I} \end{bmatrix}.$$
Letting $\mathbf{C}$ denote the $n \times n$ centering matrix, you can write the pooled sample variance in this problem as the quadratic form:
$$\begin{align}
S_\text{pooled}^2
&= \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}^\text{T}
\mathbf{A} \begin{bmatrix} \mathbf{X} \\ \mathbf{Y} \end{bmatrix}
\quad \quad \quad
\mathbf{A} \equiv \frac{1}{2(n-1)} \begin{bmatrix} \mathbf{C} & \mathbf{0} \\ \mathbf{0} & \mathbf{C} \end{bmatrix}. \\[6pt]
\end{align}$$
Now, using standard formulae for the mean and variance of quadradic forms of normal random vectors, and noting that $\mathbf{C}$ is an idempotent matrix (i.e., $\mathbf{C} = \mathbf{C}^2$), you have:
$$\begin{align}
\mathbb{E}(S_\text{pooled}^2)
&= \text{tr}(\mathbf{A} \mathbf{\Sigma}) + \boldsymbol{\mu}^\text{T} \mathbf{A} \boldsymbol{\mu} \\[6pt]
&= \text{tr} \Bigg( \frac{1}{2(n-1)} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) + \mathbf{0} \\[6pt]
&= \frac{1}{2(n-1)} \text{tr} \Bigg( \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix} \Bigg) \\[6pt]
&= \frac{1}{2(n-1)} \Bigg[ n \times \frac{n-1}{n} \cdot \sigma_X^2 + n \times \frac{n-1}{n} \cdot \sigma_Y^2 \Bigg] \\[6pt]
&= \frac{\sigma_X^2 + \sigma_Y^2}{2}, \\[12pt]
\mathbb{V}(S_\text{pooled}^2)
&= 2 \text{tr}(\mathbf{A} \mathbf{\Sigma} \mathbf{A} \mathbf{\Sigma}) + 4 \boldsymbol{\mu}^\text{T} \mathbf{A} \mathbf{\Sigma} \mathbf{A} \boldsymbol{\mu} \\[6pt]
&= 2 \text{tr} \Bigg( \frac{1}{4(n-1)^2} \begin{bmatrix} \sigma_X^2 \mathbf{C} & \rho \sigma_X \sigma_Y \mathbf{C} \\ \rho \sigma_X \sigma_Y \mathbf{C} & \sigma_Y^2 \mathbf{C} \end{bmatrix}^2 \Bigg) + \mathbf{0} \\[6pt]
&= \frac{1}{2(n-1)^2} \text{tr} \Bigg( \begin{bmatrix} (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} & (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} \\ (\sigma_X^2 + \sigma_Y^2) \rho \sigma_X \sigma_Y \mathbf{C} & (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \mathbf{C} \end{bmatrix} \Bigg) \\[6pt]
&= \frac{1}{2(n-1)^2} \Bigg[ n \times \frac{n-1}{n} \cdot (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + n \times \frac{n-1}{n} \cdot (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \Bigg] \\[6pt]
&= \frac{1}{2(n-1)} \Bigg[ (\sigma_X^4 + \rho^2 \sigma_X^2 \sigma_Y^2) + (\sigma_Y^4 + \rho^2 \sigma_X^2 \sigma_Y^2) \Bigg] \\[6pt]
&= \frac{\sigma_X^4 + \sigma_Y^4 + 2 \rho^2 \sigma_X^2 \sigma_Y^2}{2(n-1)}. \\[12pt]
\end{align}$$
Using the equal variance assumption we have $\sigma_X = \sigma_Y = \sigma$ so the moments reduce to:
$$\mathbb{E} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = 1
\quad \quad \quad
\mathbb{V} \bigg( \frac{S_\text{pooled}^2}{\sigma^2} \bigg) = \frac{1+\rho^2}{n-1}.$$
It is usual to approximate the distribution of the quadratic form by a scaled chi-squared distribution using the method of moments. Equating the first two moments to that distribution gives the variance requirement $\mathbb{V}(S_\text{pooled}^2/\sigma^2) = 2/\nu$, which then gives the degrees-of-freedom parameter:
$$\nu = \frac{2(n-1)}{1+\rho^2}.$$
Bear in mind that the degrees-of-freedom parameter here depends on the true correlation coefficient $\rho$, and you may need to estimate this using the sample correlation in your actual problem.
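A quick simulation check of the variance result (a sketch; the values of $n$, $\rho$ and $\sigma$ are arbitrary):
set.seed(4)
n <- 10; rho <- 0.6; sigma <- 2; nsim <- 2e4
Sig <- sigma^2 * matrix(c(1, rho, rho, 1), 2)
R <- chol(Sig)                                   # Sig = t(R) %*% R
sp2 <- replicate(nsim, {
  Z <- matrix(rnorm(2 * n), n, 2) %*% R          # n paired bivariate normal observations
  mean(apply(Z, 2, var))                         # pooled sample variance
})
var(sp2 / sigma^2)                               # simulated
(1 + rho^2) / (n - 1)                            # theoretical value above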
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/482118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
} |
Mean (or lower bound) of Gaussian random variable conditional on sum, $E(X^2| k \geq|X+Y|)$ Suppose I have two mean zero, independent Gaussian random variables
$X \sim \mathcal{N}(0,\sigma_1^2)$ and
$Y \sim \mathcal{N}(0,\sigma_2^2)$.
Can I say something about the conditional expectation $E(X^2| k \geq|X+Y|)$?
I think the expectation should be given by the double integral
$$ E(X^2| k \geq|X+Y|) =\frac{\int_{y=-\infty}^\infty \int_{x= y - k}^{k-y} x^2 e^{-\frac{x^2}{2\sigma^2_1}}e^{-\frac{y^2}{2\sigma^2_2}} dx dy}{\int_{y=-\infty}^\infty \int_{x= y - k}^{k-y} e^{-\frac{x^2}{2\sigma^2_1}}e^{-\frac{y^2}{2\sigma^2_2}} dx dy} \,.$$
Is it possible to get an exact expression or a lower bound for this expectation?
Edit:
Based on your comments I was able to get an intermediate expression for the nominator and the denominator.
Denominator:
It is well known that if $X \sim \mathcal{N}(0,\sigma_1^2)$ and
$Y \sim \mathcal{N}(0,\sigma_2^2)$ and $ X \perp Y$, then $X + Y \sim \mathcal{N}(0,\sigma_1^2 + \sigma_2^2)$ and therefore
\begin{equation*}
\begin{aligned}
\Pr(|X+Y| \leq k) &= \Phi \left( \frac{k}{\sqrt{\sigma_1^2 + \sigma_2^2}} \right) - \Phi \left( \frac{-k}{\sqrt{\sigma_1^2 + \sigma_2^2}} \right)
\end{aligned}
\end{equation*}
so that
\begin{equation*}
\begin{aligned}
\int_{y=-\infty}^\infty \int_{x= y - k}^{k-y} e^{-\frac{x^2}{2\sigma^2_1}}e^{-\frac{y^2}{2\sigma^2_2}} dx dy &= 2 \pi \sigma_1 \sigma_2 \Pr(|X+Y| \leq k) \\
&= 2 \pi \sigma_1 \sigma_2 \left\{\Phi \left( \frac{k}{\sqrt{\sigma_1^2 + \sigma_2^2}} \right) - \Phi \left( \frac{-k}{\sqrt{\sigma_1^2 + \sigma_2^2}} \right) \right\}
\end{aligned}
\end{equation*}
Numerator:
\begin{equation*}
\begin{aligned}
\int_{y=-\infty}^\infty \int_{x= y - k}^{k-y} x^2 e^{-\frac{x^2}{2\sigma^2_1}}e^{-\frac{y^2}{2\sigma^2_2}} dx dy & = \int_{y=-\infty}^\infty 2\int_{x= 0}^{k-y} x^2 e^{-\frac{x^2}{2\sigma^2_1}}e^{-\frac{y^2}{2\sigma^2_2}} dx dy, \quad (-x)^2 = x^2 \\
& = \int_{y=-\infty}^\infty (2\sigma_1)^{\frac{3}{2}}\int_{u= 0}^{\frac{(k-y)^2}{2 \sigma_1^2}} u^{\frac{3}{2}-1} e^{-u }e^{-\frac{y^2}{2\sigma^2_2}} du dy, \quad u = \frac{x^2}{2 \sigma_1^2} \\
& = \int_{y=-\infty}^\infty (2\sigma_1)^{\frac{3}{2}}\Gamma\left(\frac{3}{2},\frac{(k-y)^2}{2 \sigma_1^2} \right) e^{-\frac{y^2}{2\sigma^2_2}} dy, \quad \Gamma(s,x) = \int_0^x t^{s-1} e^t dt \\
& = (2\sigma_1)^{\frac{3}{2}} \sqrt{2}\sigma_2 \int_{v=-\infty}^\infty \Gamma\left(\frac{3}{2},\frac{\sigma_2^2}{\sigma_1^2}\left(\frac{k}{\sqrt{2}\sigma_2}-v\right)^2 \right) e^{-v^2} dv, \quad v = \frac{y}{\sqrt{2}\sigma_2} \\
& \geq 4 \sigma_1^{\frac{3}{2}}\sigma_2 \Gamma\left(\frac{3}{2}\right) \int_{v=-\infty}^\infty\left(1 + \frac{2}{3}\frac{\sigma_2^2}{\sigma_1^2}\left(\frac{k}{\sqrt{2}\sigma_2}-v\right)^2 \right)^{\frac{1}{2}} e^{-v(v+1)} dv
\end{aligned}
\end{equation*}
where the last inequality uses the bound from this post. Any ideas how ti simplify this further to get a nontrivial lower bound on the conditional expectation $E(X^2| k \geq|X+Y|)$ are much appreciated.
| Let's simplify a little. Define
$$(U,V) = \frac{1}{\sqrt{\sigma_X^2+\sigma_Y^2}}\left(X+Y,\ \frac{\sigma_Y}{\sigma_X}X - \frac{\sigma_X}{\sigma_Y}Y\right).$$
You can readily check that $U$ and $V$ are uncorrelated standard Normal variables (whence they are independent). In terms of them,
$$X = \frac{\sigma_X}{\sqrt{\sigma_X^2 + \sigma_Y^2}} \left(\sigma_X U + \sigma_Y V\right) = \alpha U + \beta V$$
defines the coefficients of $X$ in terms of $(U,V).$ The question desires a formula for
$$E\left[X^2 \mid |X+Y|\ge k\right] = E\left[\left(\alpha U + \beta V\right)^2 \mid |U| \ge \lambda\right]$$
with $\lambda = k/\sqrt{\sigma_X^2 + \sigma_Y^2} \ge 0.$
Expanding the square, we find
$$\begin{aligned}
E\left[\left(\alpha U + \beta V\right)^2 \mid |U| \ge \lambda\right] &= \alpha^2E\left[U^2 \mid |U| \ge \lambda\right] \\&+ 2\alpha\beta E\left[UV \mid |U| \ge \lambda\right] \\&+ \beta^2 E\left[V^2 \mid |U| \ge \lambda\right].
\end{aligned}$$
The second term is zero because $E[V]=0$ and $V$ is independent of $U$. The third term is $\beta^2$ because the independence of $V$ and $U$ gives
$$E\left[V^2\mid |U|\ge \lambda\right] = E\left[V^2\right] = 1.$$
This leaves us to compute the first conditional expectation. The standard (elementary) formula expresses it as the fraction
$$E\left[U^2 \mid |U|\ge \lambda\right] = \frac{\left(2\pi\right)^{-1/2}\int_{|u|\ge \lambda} u^2 e^{-u^2/2}\,\mathrm{d}u}{\left(2\pi\right)^{-1/2}\int_{|u|\ge \lambda} e^{-u^2/2}\,\mathrm{d}u}$$
The denominator is $\Pr(|U|\ge \lambda) = 2\Phi(-\lambda)$ where $\Phi$ is the standard Normal distribution function.To compute the numerator, substitute $x = u^2/2$ to obtain
$$\frac{1}{\sqrt{2\pi}}\int_{|u|\ge \lambda}u^2 e^{-u^2/2}\,\mathrm{d}u = \frac{2^{3/2}}{\sqrt{2\pi}}\int_{\lambda^2/2}^\infty x^{3/2\,-1}\ e^{-x}\,\mathrm{d}x = \frac{1}{\Gamma(3/2)}\int_{\lambda^2/2}^\infty x^{3/2\,-1}\ e^{-x}\,\mathrm{d}x.$$
This equals $\Pr(Z\ge \lambda^2/2)$ where $Z$ has a Gamma$(3/2)$ distribution. It is a regularized incomplete gamma function, $P(3/2, \lambda^2/2).$ Consequently, with $\lambda \ge 0,$
$$E\left[\left(\alpha U + \beta V\right)^2 \mid |U| \ge \lambda\right] =\beta^2 + \frac{\alpha^2 P(3/2, \lambda^2/2)}{2 \Phi(-\lambda)}.$$
To illustrate, this R implementation of the conditional expectation (with a representing $\alpha,$ b representing $\beta,$ and $k$ representing $\lambda$) uses pnorm for $\Phi$ and pgamma for the Gamma distribution:
f <- function(a, b, k) {
b^2 + a^2 * pgamma(k^2/2, 3/2, lower.tail = FALSE) / (2 * pnorm(-k))
}
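A quick Monte Carlo check of this implementation (a sketch; the values of $\sigma_X$, $\sigma_Y$ and $k$ are arbitrary, and the conditioning event is $|X+Y|\ge k$ as in the derivation above):
set.seed(5)
sx <- 1.5; sy <- 0.8; k <- 1.2
x <- rnorm(2e6, 0, sx); y <- rnorm(2e6, 0, sy)
keep <- abs(x + y) >= k
mean(x[keep]^2)                                  # simulated conditional expectation
a <- sx^2 / sqrt(sx^2 + sy^2)                    # alpha
b <- sx * sy / sqrt(sx^2 + sy^2)                 # beta
f(a, b, k / sqrt(sx^2 + sy^2))                   # lambda = k / sqrt(sx^2 + sy^2)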
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/596285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability of getting imaginary roots Let $X$ and $Y$ be independent and identically distributed uniform random variable over $(0,1)$. Let $S=X+Y$. Find the probability that the quadratic equation $9x^2+9Sx+1=0$ has no real roots.
What I attempted
For no real roots we must have $$(9S)^2-4\cdot 1\cdot 9<0$$
So, we need $P[(9S)^2-4.1.9<0]=P(9S^2<4)=P(S^2<\frac{4}{9})=P(-\frac{2}{3}<S<\frac{2}{3})=P(0<S<\frac{2}{3})$
[As $S$ can never be less than $0$]
Now, $\displaystyle P\left(0<S<\frac{2}{3}\right)=P\left(0<X+Y<\frac{2}{3}\right)=\int_{0}^{ \frac{2}{3}}\int_{0}^{ \frac{2}{3}-x}f_X(x)\cdot f_Y(y)\,dy\,dx= \int_{0}^{ \frac{2}{3}}\int_{0}^{ \frac{2}{3}-x}\,dy\,dx. $
Now, $\displaystyle \int_{0}^{ \frac{2}{3}}\int_{0}^{ \frac{2}{3}-x}\,dy\,dx=\int_{0}^{ \frac{2}{3}}\left(\frac{2}{3}-x\right)\,dx=\frac{1}{2}\left(\frac{4}{9}\right)=\frac{2}{9} $
Am I correct ?
| Yes.
And nowadays, it's easy to check for gross errors by simulation. Here is a MATLAB simulation:
>> n = 1e8; sum((9*sum(rand(n,2),2)).^2-36 < 0)/n
ans =
0.2223
In the real world, it's always good to check, or at least partially check, your work by different methods.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/345815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Conditional probability doubt Assume
$$P(B|A) = 1/5,\ P(A) = 3/4,\ P(A \cap B) = 3/20,\ \textrm{and}\ P(¬B|¬A) = 4/7.$$
Find $P(B)$.
What I tried: $$P(B)=\dfrac{ P(A \cap B)}{P(B|A)}=(3/20)/(1/5) = 3/4.$$
Answer is $P(B)=9/35.$
Where have I made the mistake?
| The probability of B can be split into the probability given A and given not A
$$P(B) = P(B|A) \cdot P(A) + P(B|\neg A) \cdot P(\neg A)$$
Each probability involving a negation can be replaced by one minus the corresponding probability without the negation, and vice versa
$$P(B) = \frac{1}{5} \cdot \frac{3}{4} + (1-P(\neg B| \neg A)) \cdot (1-P(A))$$
$$P(B) = \frac{3}{20} + (1-\frac{4}{7})\cdot (1-\frac{3}{4})$$
$$P(B) = \frac{3}{20} + \frac{3}{7} \cdot \frac{1}{4}$$
$$P(B) = \frac{3}{20} + \frac{3}{28}$$
$$P(B) = \frac{21}{140} + \frac{15}{140}$$
$$P(B) = \frac{36}{140}$$
$$P(B) = \frac{9}{35}$$
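A one-line arithmetic check in R (not part of the original answer):
(1/5) * (3/4) + (1 - 4/7) * (1 - 3/4)   # 0.2571429
9/35                                    # same value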
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/367300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Expectation of reciprocal of $(1-r^{2})$
If $r$ is the coefficient of correlation for a sample of $N$ independent observations from a bivariate normal population with population coefficient of correlation zero, then $E[(1-r^2)^{-1}]$ is
(a) $\quad\frac{(N-3)}{(N-4)}$
I tried finding the expectation from the density function but then realised that I was working with the density function of $r$ and not of its square. I don't know the density function of $r^{2}$. I am stuck. Kindly help.
| From the problem statement, you are given that a sample of $N$ observations are made from a bivariate normal population with correlation coefficient equal to zero. Under these assumptions, the probability density function (PDF) for $r$ simplifies greatly to:
\begin{eqnarray*}
f_{R}(r) & = & \frac{\left(1-r^{2}\right)^{\frac{N-4}{2}}}{\text{Beta}\left(\frac{1}{2},\,\frac{N-2}{2}\right)}
\end{eqnarray*}
over the support from $-1 < r < 1$ and zero otherwise (see here).
Now, since we know the PDF of $r$, we can readily find the expected value of the transformation $g(r)=\left(1-r^{2}\right)^{-1}$ by applying the Law of the Unconscious Statistician. This yields:
\begin{align}
E[(1-r^2)^{-1}] & = \intop_{-\infty}^{\infty}g(r)f_{R}(r)dr \\
& = \intop_{-1}^{1}\frac{\left(1-r^{2}\right)^{-1}\left(1-r^{2}\right)^{\frac{N-4}{2}}}{\text{Beta}\left(\frac{1}{2},\,\frac{N-2}{2}\right)}dr\\
& = \frac{1}{\text{Beta}\left(\frac{1}{2},\,\frac{N-2}{2}\right)}\intop_{-1}^{1}\left(1-r^{2}\right)^{\frac{N-4}{2}-\frac{2}{2}}dr\\
& = \frac{1}{\text{Beta}\left(\frac{1}{2},\,\frac{N-2}{2}\right)}\intop_{-1}^{1}\left(1-r^{2}\right)^{\frac{N-6}{2}}dr \\
& = \frac{1}{\text{Beta}\left(\frac{1}{2},\,\frac{M}{2}\right)}\intop_{-1}^{1}\left(1-r^{2}\right)^{\frac{M-4}{2}}dr&&\mbox{(Letting $M=N-2$)}
\end{align}
You should note that the last integrand looks similar to the PDF of $r$, $f_R(r)$, and is simply missing a normalizing constant $1/\text{Beta}\left(\frac{1}{2},\,\frac{M-2}{2}\right)$. Multiplying the top and bottom by $\text{Beta}\left(\frac{1}{2},\,\frac{M-2}{2}\right)$ (which is simply equal to 1 and does not change the last line) and rearranging terms gives:
\begin{eqnarray*}
\frac{\text{Beta}\left(\frac{1}{2},\,\frac{M-2}{2}\right)}{\text{Beta}\left(\frac{1}{2},\,\frac{M}{2}\right)}\intop_{-1}^{1}\underbrace{\frac{\left(1-r^{2}\right)^{\frac{M-4}{2}}}{\text{Beta}\left(\frac{1}{2},\,\frac{M-2}{2}\right)}}_{\text{PDF of $f_R(r)$ integrates to 1}}dr & = & \frac{\text{Beta}\left(\frac{1}{2},\,\frac{M-2}{2}\right)}{\text{Beta}\left(\frac{1}{2},\,\frac{M}{2}\right)}\\
\end{eqnarray*}.
Now, we can simply expand out this last term by definition of the Beta function to yield:
\begin{eqnarray*}
\frac{{{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{M-2}{2}\right)}}}{{{\Gamma\left(\frac{1}{2}+\frac{M-2}{2}\right)}}}\frac{{{\Gamma\left(\frac{1}{2}+\frac{M}{2}\right)}}}{{{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{M}{2}\right)}}} & = & \frac{{{\Gamma\left(\frac{M-2}{2}\right)\Gamma\left(\frac{1}{2}+\frac{M}{2}\right)}}}{{{\Gamma\left(\frac{1}{2}+\frac{M-2}{2}\right)}}\Gamma\left(\frac{M}{2}\right)}
\end{eqnarray*}
To simplify further, we must make use of an identity of the Gamma function. The recursion identity of the Gamma function states that for $a \gt 0$, $\Gamma(a+1)=a\Gamma(a)$. Since $M > 2$ (equivalently $N > 4$, which is needed for the expectation to be finite), we can apply this recursion identity to the $\Gamma\left(\frac{1}{2}+\frac{M}{2}\right)$ term in the numerator and the $\Gamma\left(\frac{M}{2}\right)$ term in the denominator to get:
\begin{eqnarray*}
\frac{{{\Gamma\left(\frac{M-2}{2}\right)\left(\frac{1}{2}+\frac{M}{2}-1\right)\Gamma\left(\frac{1}{2}+\frac{M}{2}-1\right)}}}{{{\Gamma\left(\frac{M-1}{2}\right)}}\left(\frac{M}{2}-1\right)\Gamma\left(\frac{M}{2}-1\right)} & = & \frac{{{\Gamma\left(\frac{M-2}{2}\right)\left(\frac{M-1}{2}\right)\Gamma\left(\frac{M-1}{2}\right)}}}{{{\Gamma\left(\frac{M-1}{2}\right)}}\left(\frac{M-2}{2}\right)\Gamma\left(\frac{M-2}{2}\right)}\\
& = & \frac{{{\frac{M-1}{2}}}}{\frac{M-2}{2}}\\
& = & {{\left(\frac{M-1}{2}\right)}}\left(\frac{2}{M-2}\right)\\
& = & \frac{M-1}{M-2}
\end{eqnarray*}
Finally, if we replace $M$ with $N-2$ from our original definition of $M$, we obtain:
\begin{eqnarray*}
\frac{(N-2)-1}{(N-2)-2} & = & \frac{N-2-1}{N-2-2}\\
& = & \frac{N-3}{N-4}
\end{eqnarray*}
and this completes the proof.
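A short simulation check of this result (a sketch; $N$ is arbitrary, and drawing the two samples independently makes the population correlation zero):
set.seed(6)
N <- 10; nsim <- 1e5
vals <- replicate(nsim, {
  x <- rnorm(N); y <- rnorm(N)
  1 / (1 - cor(x, y)^2)
})
mean(vals)             # simulated
(N - 3) / (N - 4)      # = 7/6 for N = 10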
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/384897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Iterated expectations and variances examples Suppose we generate a random variable $X$ in the following way. First we flip a fair coin. If the coin is heads, take $X$ to have a $Unif(0,1)$ distribution. If the coin is tails, take $X$ to have a $Unif(3,4)$ distribution.
Find the mean and standard deviation of $X$.
This is my solution. I wanted to check if it's correct or if there's a better approach.
Let $Y$ denote the random variable that is $1$ if the coin lands on a head and $0$ otherwise
Firstly $\mathbb{E}(\mathbb{E}(X|Y)) = \mathbb{E}(X)$
Thus $\mathbb{E}(\mathbb{E}(X|Y)) = \frac{1}{2} \cdot \mathbb{E}(X|Y=0) + \frac{1}{2} \cdot \mathbb{E}(X|Y=1) = \frac{1}{2} \cdot \frac{7}{2} + \frac{1}{2} \cdot \frac{1}{2}=2$
Secondly $\mathbb{V}(X) = \mathbb{E}(\mathbb{V}(X|Y))+\mathbb{V}(\mathbb{E}(X|Y))$
Now $\mathbb{V}(X|Y = 0) = \mathbb{V}(X|Y=1) = \frac{1}{12}$. Thus $\mathbb{E}(\mathbb{V}(X|Y)) = \frac{1}{12}$. Next, calculating $\mathbb{V}(\mathbb{E}(X|Y)) = \mathbb{E}[(\mathbb{E}(X|Y))^2] - (\mathbb{E}(\mathbb{E}(X|Y)))^2 = (\frac{1}{2} \cdot \frac{49}{4} + \frac{1}{2} \cdot \frac{1}{4}) - (2)^2 = \frac{50}{8} - 4 = \frac{9}{4}.$
| There are generally two ways to approach these types of problems: by (1) Finding the second stage expectation $E(X)$ with the theorem
of total expectation; or by (2) Finding the second stage expectation
$E(X)$, using $f_{X}(x)$. These are equivalent methods, but you
might find one easier to comprehend, so I present them both in detail
below for $E(X)$. The approach is similar for $Var(X)$, so I exclude
its presentation, but can update my answer if you really need it.
Method (1) Finding the second stage expectation $E(X)$ with the theorem of total expectation
In this case, the Theorem of Total Expectation states that:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)P(Y=y)\\
& = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)
\end{eqnarray*}
So, we simply need to find the corresponding terms in the line above
for $y=0$ and $y=1$. We are given the following:
\begin{eqnarray*}
f_{Y}(y) & = & \begin{cases}
\frac{1}{2} & \text{for}\,y=0\,(tails),\,1\,(heads)\\
0 & \text{otherwise}
\end{cases}
\end{eqnarray*}
and
\begin{eqnarray*}
f_{X|Y}(x|y) & = & \begin{cases}
1 & \text{for}\,3<x<4;\,y=0\\
1 & \text{for}\,0<x<1;\,y=1
\end{cases}
\end{eqnarray*}
Now, we simply need to obtain $E(X|Y=y)$ for each realization of $y$:
\begin{eqnarray*}
E(X|Y=y) & = & \int_{-\infty}^{\infty}xf_{X|Y}(x|y)dx\\
& = & \begin{cases}
\int_{3}^{4}x(1)dx & \text{for}\,y=0\\
\int_{0}^{1}x(1)dx & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\left.\frac{x^{2}}{2}\right|_{x=3}^{x=4} & \text{for}\,y=0\\
\left.\frac{x^{2}}{2}\right|_{x=0}^{x=1} & \text{for}\,y=1
\end{cases}\\
& = & \begin{cases}
\frac{7}{2} & \text{for}\,y=0\\
\frac{1}{2} & \text{for}\,y=1
\end{cases}
\end{eqnarray*}
So, substituting each term into the Theorem of Total Expectation above
yields:
\begin{eqnarray*}
E(X) & = & \sum_{y=0}^{1}E(X|Y=y)f_{Y}(y)\\
& = & E(X|Y=0)f_{Y}(0)+E(X|Y=1)f_{Y}(1)\\
& = & \left(\frac{7}{2}\right)\left(\frac{1}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
Method (2) Finding the second stage expectation $E(X)$, using $f_{X}(x)$
To use this method, we first find the $f_{X,Y}(x,y)$ and $f_{X}(X)$.
To begin, recall that $f_{X,Y}(x,y)$ is given by:
\begin{eqnarray*}
f_{X,Y}(x,y) & = & f_{X|Y}(x|y)f_{Y}(y)\\
& = & \begin{cases}
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,3<x<4;\,y=0\\
\left(1\right)\left(\frac{1}{2}\right) & \text{for}\,0<x<1;\,y=1
\end{cases}\\
\end{eqnarray*}
and we can find $f_{X}(x)$ by summing out the $y$ component:
\begin{eqnarray*}
f_{X}(x) & = & \sum_{y=0}^{1}f_{X,Y}(x,y)\\
& = & f_{X,Y}(x,0)+f_{X,Y}(x,1)\\
& = & \frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)
\end{eqnarray*}
And now, we can just find $E(X)$ using the probability density function of $f_{X}(x)$ as
usual:
\begin{eqnarray*}
E(X) & = & \int_{-\infty}^{\infty}xf_{X}(x)dx\\
& = & \int_{-\infty}^{\infty}x\left[\frac{1}{2}I(3\le x\le4)+\frac{1}{2}I(0\le x\le1)\right]dx\\
& = & \frac{1}{2}\int_{-\infty}^{\infty}xI(3\le x\le4)dx+\frac{1}{2}\int_{-\infty}^{\infty}xI(0\le x\le1)dx\\
& = & \frac{1}{2}\int_{3}^{4}xdx+\frac{1}{2}\int_{0}^{1}xdx\\
& = & \left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=3}^{x=4}+\left(\frac{1}{2}\right)\left.\left(\frac{x^{2}}{2}\right)\right|_{x=0}^{x=1}\\
& = & \left(\frac{1}{2}\right)\left(\frac{7}{2}\right)+\left(\frac{1}{2}\right)\left(\frac{1}{2}\right)\\
& = & 2
\end{eqnarray*}
the same two approaches can be used to compute $Var(X)$.
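A quick simulation check (a sketch, not part of the original answer); combining the pieces computed in the question gives $\mathbb{V}(X)=\frac{1}{12}+\frac{9}{4}=\frac{7}{3}$:
set.seed(7)
n <- 1e6
heads <- rbinom(n, 1, 1/2)
x <- ifelse(heads == 1, runif(n), runif(n, 3, 4))
c(mean(x), sd(x))            # ~ 2 and sqrt(7/3) = 1.5275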
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/404102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Can I find $f_{x,y,z}(x,y,z)$ from $f_{x,y+z}(x,y+z)$? Suppose I know densities $f_{x,y+z}(x,y+z)$, $f_y(y)$, $f_z(z)$, and $y$ and $z$ are independent. Given this information, can I derive $f_{x,y,z}(x,y,z)$?
| It is tempting to think so, but a simple counterexample with a discrete probability distribution shows why this is not generally possible.
Let $(X,Y,Z)$ take on the eight possible values $(\pm1,\pm1,\pm1).$ Let $0\le p\le 1$ be a number and use it to define a probability distribution $\mathbb{P}_p$ as follows:
*
*$\mathbb{P}_p=1/8$ whenever $Y+Z\ne 0.$
*$\mathbb{P}_p(1,-1,1) = \mathbb{P}_p(-1,-1,1)=p/4.$
*$\mathbb{P}_p(1,1,-1) = \mathbb{P}_p(-1,1,-1)=(1-p)/4.$
These probabilities evidently are positive and sum to $1:$
$$\begin{array}{crrr|rc}
\mathbb{P}_p & X & Y & Z & Y+Z & \Pr(Y,Z)\\
\hline
\frac{1}{8} & \color{gray}{-1} & \color{gray}{-1} & \color{gray}{-1} & -2 & \frac{1}{4}\\
\frac{1}{8} & 1 & \color{gray}{-1} & \color{gray}{-1} & -2 & \cdot\\
\frac{p}{4} & \color{gray}{-1} & \color{gray}{-1} & 1 & 0&\frac{1}{4}\\
\frac{1-p}{4} & 1 & \color{gray}{-1} & 1 & 0 & \cdot\\
\frac{1-p}{4} & \color{gray}{-1} & 1 & \color{gray}{-1} & 0 & \frac{1}{4}\\
\frac{p}{4} & 1 & 1 & \color{gray}{-1} & 0 & \cdot\\
\frac{1}{8} & \color{gray}{-1} & 1 & 1 & 2 & \frac{1}{4}\\
\frac{1}{8} & 1 & 1 & 1 & 2 & \cdot\\
\end{array}$$
Note two things:
*
*$Y$ and $Z$ are independent Rademacher variables. That is, $\mathbb{P}_p(Y=y,Z=z)=1/4$ for all $y,z\in\{-1,1\}.$
*The joint distribution of $(X,Y+Z)$ does not depend on $p,$ as you may check by adding (2) and (3) to deduce that $\mathbb{P}_p(X=x\mid Y+Z=0)=1/2$ for $x\in\{-1,1\}.$ (Thus, $X$ is independent of $Y+Z.$)
The value of $p$ does not appear in the marginal distributions of $Y$ and $Z$ nor in the joint distribution of $(X,Y+Z).$ Thus, these distributions do not determine $p.$ Nevertheless, different values of $p$ produce different distributions of $(X,Y,Z)$: that's the counterexample.
If you must have an example involving continuous distributions (with densities), then add an independent standard trivariate Normal variable to $(X,Y,Z).$
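The counterexample is easy to verify by direct enumeration; here is a small R sketch (not part of the original answer):
outcomes <- expand.grid(X = c(-1, 1), Y = c(-1, 1), Z = c(-1, 1))
prob <- function(p) with(outcomes,
  ifelse(Y + Z != 0, 1/8, ifelse(X == Y, p/4, (1 - p)/4)))     # the table above
tapply(prob(0.9), list(Y = outcomes$Y, Z = outcomes$Z), sum)   # Y, Z independent: all 1/4
joint_X_S <- function(p) {                                     # distribution of (X, Y+Z)
  pr <- prob(p)
  tapply(pr, list(X = outcomes$X, S = outcomes$Y + outcomes$Z), sum)
}
joint_X_S(0.2)
joint_X_S(0.9)                                          # identical to the previous table, yet ...
cbind(outcomes, p0.2 = prob(0.2), p0.9 = prob(0.9))     # ... the trivariate laws differ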
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/520274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Integration of a equation $$\int_{x}^{y}\left[\sum_{i=1}^{N}\sqrt{a}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\right]^{\!2}da$$
Can anyone solve this integral for me?
I don't know how the summation and the integration interact with each other.
| You have to square the sum first. Note that the $\sqrt{a}$ is common to all terms in
$$\sqrt{a}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\cdot \sqrt{a}\cos\left(\frac{2\pi(d_{j}-a)}{\lambda} \right),$$
so it can factor out as $a.$ That is, we have
$$\int_{x}^{y}a\left[\sum_{i=1}^{N}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\right]^{\!2}da.$$
We must now square the sum in order to proceed:
\begin{align*}
&\phantom{=}\left[\sum_{i=1}^{N}\cos\left(\frac{2\pi(d_{i}-a)}{\lambda} \right)\right]^{\!2}\\
&=\sum_{i,j=1}^{N}\cos\left(\frac{2\pi(d_i-a)}{\lambda} \right)\cos\left(\frac{2\pi(d_j-a)}{\lambda} \right)\\
&=\sum_{i,j=1}^{N}\cos\left(\frac{2\pi d_i}{\lambda}-\frac{2\pi a}{\lambda} \right)\cos\left(\frac{2\pi d_j}{\lambda}-\frac{2\pi a}{\lambda}\right)\\
&=\frac12\sum_{i,j=1}^{N}\left[\cos\left(\frac{2\pi d_i}{\lambda}-\frac{2\pi d_j}{\lambda} \right)+\cos\left(\frac{2\pi d_i}{\lambda}+\frac{2\pi d_j}{\lambda}-\frac{4\pi a}{\lambda}\right)\right]\\
&=\frac12\sum_{i,j=1}^{N}\left[\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)+\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)\right].
\end{align*}
Now we can move the integral inside the sum:
\begin{align*}
\int&=\int_x^y\frac{a}{2}\sum_{i,j=1}^{N}\left[\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)+\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)\right]da\\
&=\frac12\sum_{i,j=1}^{N}\int_x^y a\left[\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)+\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)\right]da\\
&=\frac12\sum_{i,j=1}^{N}\left[\int_x^y a\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)da\right].
\end{align*}
To continue, we note that according to the comments, there exists $k,\;1\le k\le N$ such that $d_k=a.$ Without loss of generality, we will just assume that $k=N,$ so that $d_N=a.$ Now with the double-sum over $i$ and $j,$ we have four cases to deal with:
*
*$i\not=N, j\not=N.$ The integral is then
\begin{align*}
&\phantom{=}\frac12\sum_{i,j=1}^{N-1}\left[\int_x^y a\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)da\right]\\
&=\frac12\sum_{i,j=1}^{N-1}\Bigg[\left(\frac{y^2}{2}-\frac{x^2}{2}\right)\cos\left(\frac{2\pi(d_i-d_j)}{\lambda}\right)\\
&\qquad+\int_x^ya\cos\left(\frac{2\pi(d_i+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)da\Bigg]
\end{align*}
*$i\not=N, j=N.$ For this case, there are $N-1$ terms, and the integral is
\begin{align*}
&\phantom{=}\frac12\sum_{i=1}^{N-1}\left[\int_x^y a\cos\left(\frac{2\pi(d_i-a)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(d_i+a)}{\lambda}-\frac{4\pi a}{\lambda}\right)da\right]\\
&=\frac12\sum_{i=1}^{N-1}\left[\int_x^y a\cos\left(\frac{2\pi(d_i-a)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(d_i-a)}{\lambda}\right)da\right]\\
&=\sum_{i=1}^{N-1}\int_x^y a\cos\left(\frac{2\pi(d_i-a)}{\lambda}\right)da.
\end{align*}
*$i=N, j\not=N.$ For this case, there are $N-1$ terms, and the integral is
\begin{align*}
&\phantom{=}\frac12\sum_{j=1}^{N-1}\left[\int_x^y a\cos\left(\frac{2\pi(a-d_j)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(a+d_j)}{\lambda}-\frac{4\pi a}{\lambda}\right)da\right]\\
&=\frac12\sum_{j=1}^{N-1}\left[\int_x^y a\cos\left(\frac{2\pi(d_j-a)}{\lambda}\right)da+\int_x^ya\cos\left(\frac{2\pi(d_j-a)}{\lambda}\right)da\right]\\
&=\sum_{j=1}^{N-1}\int_x^y a\cos\left(\frac{2\pi(d_j-a)}{\lambda}\right)da.
\end{align*}
This is the same expression as in Case 2, so we can consolidate these two cases into one case that's doubled: Case 2 and 3:
$$2\sum_{j=1}^{N-1}\int_x^y a\cos\left(\frac{2\pi(d_j-a)}{\lambda}\right)da.$$
*$i=j=N.$ For this case, there's only $1$ term, and the integral is
$$\frac12\left[\int_x^y a\,da+\int_x^ya\,da\right]=\int_x^ya\,da=\frac{y^2}{2}-\frac{x^2}{2}.$$
This is getting rather unwieldy to continue writing down. I would just remark that the integral
$$\int_x^y a\cos(c+a)\,da=\cos(c+y)+y\sin(c+y)-\cos(c+x)-x\sin(c+x).$$
All the remaining integrals are of this form. I'll let you finish.
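For instance, that last antiderivative is easy to verify numerically in R (arbitrary values of $c$, $x$, $y$):
cc <- 0.7; x <- 0.3; y <- 2.1
integrate(function(a) a * cos(cc + a), x, y)$value
cos(cc + y) + y * sin(cc + y) - cos(cc + x) - x * sin(cc + x)   # same value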
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/527167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Covariance in the errors of random variables I have two computed variables say $x\sim N(\mu_{x}, \sigma_{x})$ and $y\sim N(\mu_y, \sigma_y)$. Additionally, the $\sigma_x$ and $\sigma_y$ are both computed from different types of errors (different components used to compute $\mu_x$ and $\mu_y$).
$$\begin{align}
\sigma_x & = \sqrt{A_x^2 + B_x^2 + C_x^2 + D_x^2}\\
\sigma_y & = \sqrt{A_y^2 + B_y^2 + C_y^2 + D_y^2}
\end{align}$$
My goal is to find the covariance in $\sigma_x$ and $\sigma_y$.
I know that (assuming A, B, C, D are independent from each other, thus cross terms are zero) for,
\begin{align}
\text{cov}([A_x, B_x, C_x, D_x], [A_y, B_y, C_y, D_y]) = \text{cov}(A_x, A_y) + \text{cov}(B_x, B_y)+ \text{cov}(C_x, C_y)+ \text{cov}(D_x, D_y)
\end{align}
However, I am stuck when I have to compute $\text{cov}(\sqrt{[A_x^2, B_x^2, C_x^2, D_x^2]}, \sqrt{[A_y^2, B_y^2, C_y^2, D_y^2]})$.
I am not sure if the relation $\sqrt{\text{cov}(A^2, B^2)} = \text{cov}(A, B)$ works.
Any help will be appreciated.
Apologies, if this question is not in the right format to ask.
EDIT:
Following of how $X$ is computed using $A$, $B$, $C$ and $D$,
\begin{align}
X = \dfrac{A}{B} + C + D
\end{align}
| Generally speaking, the relation $\sqrt{Cov(A^2,B^2)}=Cov(A,B)$ does not hold. Consider the following counterexample:
Let $X\sim U[0,2\pi]$ and $Y=\sin(X), Z=\cos(X)$. You can see here the proof for $Cov(Y,Z)=0$. Now, let's examine $Cov(Y^2,Z^2)$:
$$Cov(Y^2,Z^2)=E[(Y^2-E[Y^2])(Z^2-E[Z^2])]=E[Y^2Z^2]-E[Y^2]E[Z^2]$$
Since $X$ has density $\frac{1}{2\pi}$ on $[0,2\pi]$,
$$E[Y^2]=\frac{1}{2\pi}\int_0^{2\pi}\sin^2(x)dx=\frac{1}{2},\qquad E[Z^2]=\frac{1}{2\pi}\int_0^{2\pi}\cos^2(x)dx=\frac{1}{2}$$
and
$$E[Y^2Z^2]=\frac{1}{2\pi}\int_0^{2\pi}\sin^2(x)\cos^2(x)dx=\frac{1}{2\pi}\cdot\frac{\pi}{4}=\frac{1}{8}.$$
Overall we get $Cov(Y^2,Z^2)=\frac{1}{8}-\frac{1}{4}=-\frac{1}{8}\ne 0$, so $\sqrt{Cov(Y^2,Z^2)}\ne Cov(Y,Z)$ (here the square root is not even real).
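A quick simulation sketch (sample size chosen arbitrarily) confirms these values:
set.seed(1)
x <- runif(1e6, 0, 2 * pi)
y <- sin(x); z <- cos(x)
cov(y, z)      # close to 0
cov(y^2, z^2)  # close to -1/8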
The covariance of $x,y$ is defined as $Cov(x,y)=E[xy]-E[x]E[y]$. Given $x\sim N(\mu_x,\sigma_x^2), y\sim N(\mu_y,\sigma_y^2)$, we know that $E[x]E[y]=\mu_x \mu_y$ and we're left with finding $E[xy]$.
As explained here, we can write $xy$ as $\frac{1}{4}(x+y)^2-\frac{1}{4}(x-y)^2$ (check it!). For our $x,y$ we get that $$(x+y)\sim N(\mu_x+\mu_y,\sigma_x^2+\sigma_y^2+2\sigma_x\sigma_y),\qquad (x-y)\sim N(\mu_x-\mu_y,\sigma_x^2+\sigma_y^2-2\sigma_x\sigma_y)$$
Denote $S_+=\sqrt{\sigma_x^2+\sigma_y^2+2\sigma_x\sigma_y}$ and $S_-=\sqrt{\sigma_x^2+\sigma_y^2-2\sigma_x\sigma_y}$, we get:
$$\frac{(x+y)}{S_+}\sim N\left(\frac{\mu_x+\mu_y}{S_+},1\right),\qquad \frac{(x-y)}{S_-}\sim N\left(\frac{\mu_x-\mu_y}{S_-},1\right)$$
Next, we can write
$$E[xy]=\frac{1}{4}E[(x+y)^2]-\frac{1}{4}E[(x-y)^2]=\frac{1}{4}E\left[S^2_+\left(\frac{x+y}{S_+}\right)^2\right]-\frac{1}{4}E\left[S^2_-\left(\frac{x-y}{S_-}\right)^2\right]\\=\frac{1}{4}S^2_+E\left[\left(\frac{x+y}{S_+}\right)^2\right]-\frac{1}{4}S^2_-E\left[\left(\frac{x-y}{S_-}\right)^2\right]$$
Now look at $E\left[\left(\frac{x+y}{S_+}\right)^2\right]$: we know that $\frac{(x+y)}{S_+}\sim N\left(\frac{\mu_x+\mu_y}{S_+},1\right)$ so its square has a chi-square distribution with non-centrality parameter $\lambda_+=\frac{(\mu_x+\mu_y)^2}{S_+^2}$, thus $E\left[\left(\frac{x+y}{S_+}\right)^2\right]=1+\lambda_+$. In a similar manner, $E\left[\left(\frac{x-y}{S_-}\right)^2\right]=1+\lambda_-$ where $\lambda_-=\frac{(\mu_x-\mu_y)^2}{S_-^2}$. We overall get:
$$E[xy]=\frac{1}{4}S^2_+E\left[\left(\frac{x+y}{S_+}\right)^2\right]-\frac{1}{4}S^2_-E\left[\left(\frac{x-y}{S_-}\right)^2\right]\\=
\frac{1}{4}(S^2_++(\mu_x+\mu_y)^2)-\frac{1}{4}(S^2_-+(\mu_x-\mu_y)^2)\\=
\frac{1}{4}(\sigma_x^2+\sigma_y^2+2\sigma_x\sigma_y-\sigma_x^2-\sigma_y^2+2\sigma_x\sigma_y+\mu^2_x+2\mu_x\mu_y+\mu^2_y-\mu^2_x+2\mu_x\mu_y-\mu^2_y)\\=\frac{1}{4}(4\sigma_x\sigma_y+4\mu_x\mu_y)$$
And finally
$$Cov(x,y)=E[xy]-E[x]E[y]=\sigma_x\sigma_y$$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/550338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability of a sum of probabilistic random variables? Suppose we have $\mathbb P(A > x) \leq m$ and $\mathbb P(B > x) \leq m$. What is $\mathbb P(A + B > y)$? I have been looking for a related axiom and not had any luck.
| Without any additional assumptions (such as independence), we can say the following:
If $x > \frac{1}{2}y$, we can't bound $\mathbb P(A + B > y)$. In fact, for any $p$, we can find $A$ and $B$ such that $\mathbb P(A + B > y) = p$ regardless of the value of $m$. Otherwise, when $x ≤ \frac{1}{2}y$, we can infer that $\mathbb P(A + B > y) ≤ 2m$.
To prove this, first consider the case when $x > \frac{1}{2}y$. Let $p ∈ [0, 1]$. Set $ε = \frac{1}{2}(x - \frac{1}{2}y)$. Give $(A, B)$ the following joint distribution:
*With probability $p$, yield $(\frac{1}{2}y + ε,\; \frac{1}{2}y + ε)$,
*With probability $1 - p$, yield $(\frac{1}{2}(y - 1),\; \frac{1}{2}(y - 1))$.
Then $A + B$ has the following distribution:
*With probability $p$, yield $\frac{1}{2}y + ε + \frac{1}{2}y + ε = y + 2ε$.
*With probability $1 - p$, yield $\frac{1}{2}(y - 1) + \frac{1}{2}(y - 1) = y - 1$.
Thus, $\mathbb P(A + B > y) = p$. To check that $\mathbb P(A > x) ≤ m$ (and, similarly, that $\mathbb P(B > x) ≤ m$), notice that $A$ is surely at most $\frac{1}{2}y + ε$, and
$$\begin{align}\tfrac{1}{2}y + ε
&= \tfrac{1}{2}y + \tfrac{1}{2}(x - \tfrac{1}{2}y) \\
&= \tfrac{1}{2}y + \tfrac{1}{2}x - \tfrac{1}{4}y \\
&= \tfrac{1}{2}x + \tfrac{1}{4}y \\
&< \tfrac{1}{2}x + \tfrac{1}{2}x \\
&= x,\end{align}$$
so $P(A > x) = 0 ≤ m$.
Now consider the case when $x ≤ \frac{1}{2}y$. Whenever $A + B > y$, we also have $A + B > 2x$, so either $A > x$ or $B > x$; in other words, $\{A + B > y\} \subseteq \{A > x\} \cup \{B > x\}$. By the union bound, $\mathbb P(A + B > y) ≤ \mathbb P(A > x) + \mathbb P(B > x) ≤ 2m$.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/141534",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why isn't variance defined as the difference between every value following each other? This may be a simple question for many but here it is:
Why isn't variance defined as the difference between every value following each other instead of the difference to the average of the values?
This would be the more logical choice to me, I guess I'm obviously overseeing some disadvantages. Thanks
EDIT:
Let me rephrase as clearly as possible. This is what I mean:
*Assume you have a range of numbers, ordered: 1,2,3,4,5
*Calculate and sum up (the absolute) differences (continuously, between every following value, not pairwise) between values (without using the average).
*Divide by number of differences
*(Follow-up: would the answer be different if the numbers were un-ordered)
-> What are the disadvantages of this approach compared to the standard formula for variance?
| Just as a complement to the other answers: the variance can be computed from the squared differences between all pairs of terms:
$$\begin{align}
&\text{Var}(X) = \\
&\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i-x_j\right)^2 = \\
&\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left(x_i - \overline x -x_j + \overline x\right)^2 = \\
&\frac{1}{2\cdot n^2}\sum_i^n\sum_j^n \left((x_i - \overline x) -(x_j - \overline x\right))^2 = \\
&\frac{1}{n}\sum_i^n \left(x_i - \overline x \right)^2
\end{align}$$
where the cross terms vanish in the last step because $\sum_i (x_i - \overline x) = 0$.
I think this is the closest to the OP's proposition. Remember the variance is a measure of dispersion of every observation at once, not only between "neighboring" numbers in the set.
UPDATE
Using your example: $X = {1, 2, 3, 4, 5}$. We know the variance is $Var(X) = 2$.
With your proposed method $Var(X) = 1$, so we know beforehand taking the differences between neighbors as variance doesn't add up. What I meant was taking every possible difference squared then summed:
$$Var(X) = \\ = \frac{(5-1)^2+(5-2)^2+(5-3)^2+(5-4)^2+(5-5)^2+(4-1)^2+(4-2)^2+(4-3)^2+(4-4)^2+(4-5)^2+(3-1)^2+(3-2)^2+(3-3)^2+(3-4)^2+(3-5)^2+(2-1)^2+(2-2)^2+(2-3)^2+(2-4)^2+(2-5)^2+(1-1)^2+(1-2)^2+(1-3)^2+(1-4)^2+(1-5)^2}{2 \cdot 5^2} = \\
=\frac{16+9+4+1+9+4+1+1+4+1+1+4+1+1+4+9+1+4+9+16}{50} = \\
=2$$
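A short R check of the identity on the same numbers (note the formula above is the population variance, i.e. it divides by $n$ rather than $n-1$):
x <- 1:5
n <- length(x)
sum(outer(x, x, "-")^2) / (2 * n^2)  # double-sum formula: 2
sum((x - mean(x))^2) / n             # population variance: 2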
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/225734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 8,
"answer_id": 6
} |
PDF of $X^2+2aXY+bY^2$ It is my first post on this forum. I am not a mathematician (so excuse me if I don't use the right vocabulary). I have two independent Normal random variables $X$ and $Y$:
\begin{aligned}
X&\sim N(0,\sigma^{2})\\
Y&\sim N(0,s^{2})
\end{aligned}
How can I find the PDF of: $$J=X^2+2aXY+bY^2$$
where $b$ is positive, $a$ can be negative but $|a|<b$.
I've done some simulations in MATLAB, and it seems that the PDF is exponential (i.e. $\rho_J(J) \propto e^{-J/J_0})$.
Does anyone have an idea to calculate $\rho_J(J)$ ?
Thank you!
| First of all, $J$ can be rewritten like this:
$$J=\frac{b-a^2}{b} X^2+b\left(\frac{a}{b}X+Y \right)^2$$
This way, you can easily see that $J$ must be non-negative and that $J\ge \frac{b-a^2}{b} X^2$ which restricts what $X$ can be if you know $J$.
Now, find the cumulative distribution function:
$$P[J \le t]=P\left[\frac{b-a^2}{b} X^2+b\left(\frac{a}{b}X+Y \right)^2\le t \right]$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[\frac{b-a^2}{b} x^2+b\left(\frac{a}{b}x+Y \right)^2\le t \right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[b\left(\frac{a}{b}x+Y \right)^2\le t-\frac{b-a^2}{b} x^2 \right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[\left(\frac{a}{b}x+Y \right)^2\le \frac{t-\frac{b-a^2}{b} x^2}b \right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[
-\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} \le \frac{a}{b}x+Y \le \sqrt{\frac{t-\frac{b-a^2}{b} x^2}b}
\right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[
-\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} -\frac{a}{b}x\le Y \le \sqrt{\frac{t-\frac{b-a^2}{b} x^2}b}-\frac{a}{b}x
\right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
P\left[
\frac{-\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} -\frac{a}{b}x}{\sigma'} \le \frac{Y}{\sigma'} \le \frac{\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} -\frac{a}{b}x}{\sigma'}
\right] f_X(x)dx$$
$$=\int_{-\sqrt{\frac{b}{b-a^2}t}}^{\sqrt{\frac{b}{b-a^2}t}}
\left(\Phi\left( \frac{\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} -\frac{a}{b}x}{\sigma'} \right)-
\Phi\left( \frac{-\sqrt{\frac{t-\frac{b-a^2}{b} x^2}b} -\frac{a}{b}x}{\sigma'} \right)\right) \frac{1}{\sigma} \phi(\frac{x}{\sigma}) dx$$
where $\Phi$ is the standard normal distribution function, $\phi$ is the standard normal density function, and $\sigma'$ denotes the standard deviation of $Y$ (the $s$ of the question).
Differentiate with respect to $t$ to find the density function.
Since the limits of the integral are functions of $t$, you can use Leibniz's rule to do this.
You will then need to use numerical integration to evaluate it because there is still an integral. It doesn't look like a simple known distribution such as the Exponential. The reason your simulations suggest it might be exponential is that it is positive and possibly for particular values of $a$ and $b$ and the standard deviations it looks close to an Exponential. Try other values of $a$ and $b$ and standard deviations.
The mean and variance of $J$ are:
$$E[J]=\sigma^2+b\sigma'^2$$
$$Var[J]=2(\sigma^4+2a^2\sigma^2 \sigma'^2+b^2\sigma'^4)$$
For an exponential random variable, the variance is the square of the mean.
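Here is a small simulation sketch (the values of $a$, $b$ and the two standard deviations are arbitrary illustrative choices, with $|a|<b$) that checks the mean and variance formulas; you can also compare a histogram of $J$ against an exponential with the same mean to see the difference:
set.seed(1)
a <- 0.3; b <- 1.5; sigma <- 1.2; sigmap <- 0.8   # illustrative values; sigmap plays the role of sigma'
x <- rnorm(1e6, 0, sigma); y <- rnorm(1e6, 0, sigmap)
j <- x^2 + 2 * a * x * y + b * y^2
c(mean(j), sigma^2 + b * sigmap^2)
c(var(j), 2 * (sigma^4 + 2 * a^2 * sigma^2 * sigmap^2 + b^2 * sigmap^4))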
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/507303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why the variance of Maximum Likelihood Estimator(MLE) will be less than Cramer-Rao Lower Bound(CRLB)? Consider this example. Suppose we have three events to happen with probability $p_1=p_2=\frac{1}{2}\sin ^2\theta ,p_3=\cos ^2\theta $ respectively. And we suppose the true value $\theta _0=\frac{\pi}{2}$. Now if we do $n$ times experiments, we will only see event 1 and event 2 happen. Hence the MLE should be
$$\mathrm{arg} \underset{\theta}{\max}\left[ \frac{1}{2}\sin ^2\theta \right] ^{m_1}\left[ \frac{1}{2}\sin ^2\theta \right] ^{m_2}$$
where $m_1$ is how many times event 1 happened and $m_2$ is how many times event 2 happened.
Obviously, the solution to the above optimization problem(also it's our MLE) is $\pi/2$ which is a constant and has nothing to do with $m_1,m_2$.
But let's see what our Fisher information is; its inverse should give us a lower bound on the variance of any unbiased estimator:
$$\begin{align}
&\frac{\left[ \partial _{\theta}\frac{1}{2}\sin ^2\theta \right] ^2}{\frac{1}{2}\sin ^2\theta}+\frac{\left[ \partial _{\theta}\frac{1}{2}\sin ^2\theta \right] ^2}{\frac{1}{2}\sin ^2\theta}+\frac{\left[ \partial _{\theta}\cos ^2\theta \right] ^2}{\cos ^2\theta}
\\
&=2\frac{\left[ \partial _{\theta}\frac{1}{2}\sin ^2\theta \right] ^2}{\frac{1}{2}\sin ^2\theta}+\frac{\left[ \partial _{\theta}\cos ^2\theta \right] ^2}{\cos ^2\theta}
\\
&=4\cos ^2\theta +4\sin ^2\theta
\\
&=4
\end{align}$$
Hence there's a conflict; where did I go wrong?
Thanks in advance!
Edit
If I calculate MLE not based on the observed data, but instead, based on all the possible outcomes before we do experiments, it should be:
$$\mathrm{arg}\underset{\theta}{\max}\left[ \frac{1}{2}\sin ^2\theta \right] ^{m_1}\left[ \frac{1}{2}\sin ^2\theta \right] ^{m_2}\left[ \cos ^2\theta \right] ^{m_3}$$
taking ln we will have
$$\left( m_1+m_2 \right) \left( 2\ln\sin \theta -\ln 2 \right) +2m_3\ln\cos \theta $$
taking derivative w.r.t. $\theta$ and set derivative to be zero we will get
$$2\left( m_1+m_2 \right) \frac{\cos \theta}{\sin \theta}=2m_3\frac{\sin \theta}{\cos \theta}$$
Hence if the true value is $\theta_0=\pi/2$, then $m_3$ will always be $0$, and we will always conclude that $\hat\theta=\pi/2$, which has no variance at all. Hence the variance of $\hat\theta$ is $0$, which seems to contradict the Cramér–Rao bound.
| The first issue you have here is that your likelihood function does not appear to match the description of your sampling mechanism. You say that you only observe events 1 and 2 happen, but if the sample size $n$ is known then this still fixes the number of times that event 3 happens (since $m_1 + m_2 + m_3 = n$). Taking an observation $\mathbf{m} \sim \text{Mu}(n, \mathbf{p})$ gives the likelihood function:
$$L_\mathbf{m}(\theta)
= \bigg( \frac{1}{2} \sin^2 (\theta) \bigg)^{m_1 + m_2} \bigg( \cos^2 (\theta) \bigg)^{n - m_1 - m_2},$$
which gives the log-likelihood:
$$\ell_\mathbf{m}(\theta)
= \text{const} + (m_1 + m_2) \log (\sin^2 (\theta)) + (n - m_1 - m_2) \log (\cos^2 (\theta)).$$
As you can see, the likelihood function has an extra term that you have not included in your question. We can see that $M_* = M_1+M_2 \sim \text{Bin}(n, \sin^2 (\theta))$ is a sufficient statistic in this problem so the problem is essentially one of binomial inference with a transformed probability parameter. With a bit of calculus it can be shown that the MLE solves:
$$\sin^2 (\hat{\theta}) = \frac{m_1 + m_2}{n}
\quad \quad \quad \implies \quad \quad \quad
\hat{\theta} = \text{arcsin} \bigg( \sqrt{\frac{m_1 + m_2}{n}} \bigg).$$
This estimator for the parameter is generally biased (it is unbiased in the case where $\sin^2 (\theta) = \tfrac{1}{2}$) so the applicable version of the Cramér–Rao lower bound in this case is the generalisation for biased estimators:
$$\mathbb{V}(\hat{\theta}) \geqslant \frac{|\psi'(\theta)|}{I(\theta)}
\quad \quad \quad \quad \quad
\psi(\theta) \equiv \mathbb{E}(\hat{\theta}).$$
The expectation function is:
$$\begin{align}
\psi(\theta)
&= \sum_{m=0}^n \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \cdot \text{Bin} (m|n,\sin^2 (\theta)) \\[6pt]
&= \sum_{m=0}^n {n \choose m} \cdot \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \cdot \sin^{2m} (\theta) \cdot \cos^{2(n-m)} (\theta). \\[6pt]
&= \frac{\pi}{2} \cdot \sin^{2n} (\theta) + \sum_{m=1}^{n-1} {n \choose m} \cdot \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \cdot \sin^{2m} (\theta) \cdot \cos^{2(n-m)} (\theta), \\[6pt]
\end{align}$$
and its derivative (which appears in the bound) is:
$$\begin{align}
\psi'(\theta)
&= \sum_{m=0}^n {n \choose m} \cdot \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \frac{d}{d\theta} \bigg[ \sin^{2m} (\theta) \cdot \cos^{2(n-m)} (\theta) \bigg] \\[6pt]
&= n\pi \cdot \sin(\theta) \cos(\theta) \sin^{2(n-1)} (\theta) \\[6pt]
&\quad +
\sum_{m=1}^{n-1} {n \choose m} \cdot \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \sin^{2m-1} (\theta) \cos^{2(n-m)-1} (\theta) \\[6pt]
&\quad \quad \quad \quad \quad \times \bigg[ 2m \cos^2 (\theta) - 2(n-m) \sin^2 (\theta) \bigg]. \\[6pt]
&= n\pi \cdot \sin(\theta) \cos(\theta) \sin^{2(n-1)} (\theta) \\[6pt]
&\quad +
2 \sum_{m=1}^{n-1} {n \choose m} \cdot \text{arcsin} \bigg( \sqrt{\frac{m}{n}} \bigg) \sin^{2m-1} (\theta) \cos^{2(n-m)-1} (\theta) (m - n \sin^2 (\theta)). \\[6pt]
\end{align}$$
As you can see, this is a more complicated form for the Cramér–Rao lower bound. Nevertheless, it should hold in this problem.
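To see the bound at work away from the boundary, here is a small simulation sketch (the true $\theta_0$, $n$ and number of replications below are arbitrary choices); since the per-observation Fisher information is $4$, the unbiased-estimator bound would be $1/(4n)$:
set.seed(1)
theta0 <- pi / 4; n <- 200
m <- rbinom(1e5, n, sin(theta0)^2)   # m = m1 + m2 is sufficient
theta_hat <- asin(sqrt(m / n))
c(var(theta_hat), 1 / (4 * n))       # close to each other here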
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/592676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
A trivial question about Covariance I'm just learning about Covariance and encountered something I don't quite understand.
Assume we have two random variables X and Y, where the respective joint-probability function assigns equal weights to each event.
According to wikipedia the Cov(X, Y) can then be caluculated as follows:
$${\displaystyle \operatorname {cov} (X,Y)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-E(X))(y_{i}-E(Y)).}$$
What confuses me is the fact, that they sum only over $x_i$ and $y_i$; $i=1,...,n$ , but not $x_i$ and $y_j; \;i=1,...,n$ $j=1,...,m$ , thus many possible combinations are not calculated. In other words if we look at each calculated combination individually, we only get a $n*1$ matrix, instead of a $n*m$ matrix.
Can anyone explain this (I suppose it's rather obvious but I just don't see the reason at the moment).
| The idea is that the possible outcomes in your sample are $i=1, \ldots, n$, and each outcome $i$ has equal probability $\frac{1}{n}$ (under the probability measure that assigns equal probability to all outcomes that you appear to be using). You have $n$ outcomes, not $n^2$.
To somewhat indulge your idea, you could compute:
$$ \operatorname{Cov}(X,Y) = \sum_i \sum_j (x_i - \mu_x) (y_j - \mu_y) P(X = x_i, Y = y_j )$$
Where:
*$P(X = x_i, Y = y_j) = \frac{1}{n}$ if $i=j$ since that outcome occurs $1/n$ times.
*$P(X = x_i, Y = y_j) = 0 $ if $i \neq j$ since that outcome doesn't or didn't
occur.
But then you'd just have:
$$ \sum_i \sum_j (x_i - \mu_x) (y_j - \mu_y) P(X = x_i, Y = y_j ) = \sum_i (x_i - \mu_x) (y_i - \mu_y) P(X = x_i, Y = y_i ) $$
Which is what the original formula is when $P(X = x_i, Y = y_i ) = \frac{1}{n}$.
You intuitively seem to want something like $P(X = x_i, Y = y_j ) = \frac{1}{n^2}$ but that is seriously wrong.
Simple dice example (to build intuition):
Let $X$ be the result of a roll of a single 6 sided die. Let $Y = X^2$.
Recall that a probability space has three components: a sample space $\Omega$, a set of events $\mathcal{F}$, and a probability measure $P$ that assigns probabilities to events. (I'm going to hand wave away the event stuff to keep it simpler.)
$X$ and $Y$ are functions from $\Omega$ to $\mathbb{R}$. We can write out the possible values for $X$ and $Y$ as a function of $\omega \in \Omega$
$$ \begin{array}{rrr} & X(\omega) & Y(\omega) \\
\omega_1 & 1 & 1\\
\omega_2 & 2 & 4 \\
\omega_3 & 3 & 9 \\
\omega_4 & 4 & 16 \\
\omega_5 & 5 & 25 \\
\omega_6 & 6 & 36
\end{array}
$$
We don't have 36 possible outcomes here. We have 6.
Since each outcome of a die is equally likely, we have $P( \{ \omega_1 \}) = P( \{ \omega_2 \}) = P( \{ \omega_3 \}) = P( \{ \omega_4 \}) = P( \{ \omega_5 \}) = P( \{ \omega_6 \}) = \frac{1}{6}$. (If your die wasn't fair, these numbers could be different.)
What's the mean of $X$?
\begin{align*}
\operatorname{E}[X] = \sum_{\omega \in \Omega} X(\omega) P( \{ \omega \} ) &= 1 \frac{1}{6} + 2\frac{1}{6} + 3 \frac{1}{6} + 4 \frac{1}{6} + 5 \frac{1}{6} + 6 \frac{1}{6}\\
&= \frac{7}{2}
\end{align*}
What's the mean of $Y$?
\begin{align*}
\operatorname{E}[Y] = \sum_{\omega \in \Omega} Y(\omega) P( \{ \omega \} ) &= 1 \frac{1}{6} + 4\frac{1}{6} + 9 \frac{1}{6} + 16 \frac{1}{6} + 25 \frac{1}{6} + 36 \frac{1}{6}\\
&= \frac{91}{6}
\end{align*}
What's the covariance of $X$ and $Y$?
\begin{align*}
\sum_{\omega \in \Omega} \left(X(\omega) - \frac{7}{2}\right)\left( Y(\omega) - \frac{91}{6}\right) P( \{ \omega \} ) &= \left( 1 - \frac{7}{2} \right)\left( 1 - \frac{91}{6} \right) P(\{\omega_1\}) + \left( 2 - \frac{7}{2} \right)\left( 4 - \frac{91}{6} \right) P(\{\omega_2\}) \\
&\quad + \left( 3 - \frac{7}{2} \right)\left( 9 - \frac{91}{6} \right) P(\{\omega_3\}) + \left( 4 - \frac{7}{2} \right)\left( 16 - \frac{91}{6} \right) P(\{\omega_4\}) \\
&\quad + \left( 5 - \frac{7}{2} \right)\left( 25 - \frac{91}{6} \right) P(\{\omega_5\}) + \left( 6 - \frac{7}{2} \right)\left( 36 - \frac{91}{6} \right) P(\{\omega_6\}) \\
&\approx 20.4167
\end{align*}
Don't worry about the arithmetic. The point is that to calculate $\operatorname{Cov}\left(X , Y\right) = \operatorname{E}\left[(X -\operatorname{E}[X])(Y - \operatorname{E}[Y]) \right] = \sum_{\omega \in \Omega} \left(X(\omega) - \operatorname{E}[X]\right)\left( Y(\omega) - \operatorname{E}[Y]\right) P( \{ \omega \} ) $ you sum over the 6 possible outcomes $\omega_1, \ldots, \omega_6$.
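If you want to check the dice numbers in R, weight by the probabilities $P(\{\omega\}) = 1/6$ yourself (R's built-in cov() divides by $n-1$, which is not what we want here):
x <- 1:6
y <- x^2
p <- rep(1 / 6, 6)
EX <- sum(x * p); EY <- sum(y * p)
sum((x - EX) * (y - EY) * p)   # about 20.4167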
Back to your situation...
The possible outcomes in your sample are $i=1, \ldots, n$. Those are the outcomes you should sum over.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/266856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How can we apply the rule of stationary distribution to the continuous case of Markov chain? If the Markov chain converges, then $$\pi = Q \pi$$ where $\pi$ is the stationary (posterior) distribution and $Q$ is the transition distribution (it's a matrix in the discrete case).
I tried to apply that to the continuous case of a Markov chain in this example, where
the transition distribution is:
$$p(X_{t+1} | X_t=x_t) = \text{N}(\phi x_t, 1)$$
and the posterior (stationary) distribution is
$$X_t \sim \text{N} \Bigg( 0, \frac{1}{1-\phi^2} \Bigg)$$
Their product doesn't equal the posterior.
| That stationary distribution is correct. Using the law of total probability, you have:
$$\begin{equation} \begin{aligned}
p(X_{t+1} = x)
&= \int \limits_\mathbb{R} p(X_{t+1} = x | X_t = r) \cdot p(X_t = r) \ dr \\[6pt]
&= \int \limits_{-\infty}^\infty \text{N}(x | \phi r, 1) \cdot \text{N} \bigg( r \bigg| 0, \frac{1}{1-\phi^2} \bigg) \ dr \\[6pt]
&= \int \limits_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp \bigg( -\frac{1}{2} (x - \phi r)^2 \bigg) \cdot \sqrt{\frac{1-\phi^2}{2 \pi}} \exp \bigg( -\frac{1}{2} (1-\phi^2) r^2 \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{2 \pi} \int \limits_{-\infty}^\infty \exp \bigg( - \frac{1}{2} (x - \phi r)^2 - \frac{1}{2} (1-\phi^2) r^2 \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{2 \pi} \int \limits_{-\infty}^\infty \exp \bigg( - \frac{1}{2} \bigg[ (x - \phi r)^2 + (1-\phi^2) r^2 \bigg] \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{2 \pi} \int \limits_{-\infty}^\infty \exp \bigg( - \frac{1}{2} \bigg[ x^2 - 2 \phi x r + \phi^2 r^2 + r^2 - \phi^2 r^2 \bigg] \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{2 \pi} \int \limits_{-\infty}^\infty \exp \bigg( - \frac{1}{2} \bigg[ x^2 - 2 \phi x r + r^2 \bigg] \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{2 \pi} \int \limits_{-\infty}^\infty \exp \bigg( - \frac{1}{2} \bigg[ x^2 (1 - \phi^2) + (r-\phi x)^2 \bigg] \bigg) \ dr \\[6pt]
&= \frac{\sqrt{1-\phi^2}}{\sqrt{2 \pi}} \exp \bigg( -\frac{1}{2} (1-\phi^2) x^2 \bigg) \int \limits_{-\infty}^\infty \frac{1}{\sqrt{2 \pi}} \exp \bigg( - \frac{1}{2} (r-\phi x)^2 \bigg) \ dr \\[6pt]
&= \text{N} \bigg( x \bigg| 0, \frac{1}{1-\phi^2} \bigg) \times \int \limits_{-\infty}^\infty \text{N} (r|\phi x,1) \ dr \\[6pt]
&= \text{N} \bigg( x \bigg| 0, \frac{1}{1-\phi^2} \bigg). \\[6pt]
\end{aligned} \end{equation}$$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/423484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finite Population Variance for a Changing Population How does the addition of one unit affect the population variance of a finite population if everything else remains unchanged? What are the conditions such that the new unit leaves the variance unchanged (increases/decreases it)?
I was able to find the following paper regarding sample variances for changing finite populations:
http://www.amstat.org/sections/srms/Proceedings/papers/1987_087.pdf.
But I am asking specifically about population variances. Any help is appreciated.
| I was unable to find the sample calculations that correspond to the specific problem here (as suggested by Glen_b), but I was able to confirm the following answer with numerical calculations in R at the bottom of this answer.
Let $N$ be the initial number of units in the population and $N + 1$ be the number of units in the population after the change. Denote the initial set of observations $X = \{x_1, \ldots, x_N\}$ (i.e., one observation corresponding to each population unit). Denote the set of observations after the change $Y = X \cup \{x_{N+1}\}$.
The mean of $X$ is
$\mu_X = \frac{\sum_{i=1}^N{x_i}}{N}$.
The mean of Y is
$\mu_Y = \frac{\sum_{i=1}^{N+1}{x_i}}{N+1}
= \mu_X \frac{N}{N+1} + \frac{x_{N+1}}{N+1}$
Define $x_{N+1}$ as the original mean, $\mu_X$, plus some $\varepsilon$. Then,
the mean of $Y$ is
$\mu_Y = \mu_X \frac{N}{N+1} + \frac{\mu_X
+ \varepsilon}{N+1} = \mu_X + \frac{\varepsilon}{N+1}$
The variance of $Y$ is
$\sigma^2_Y = \frac{\sum_{i=1}^{N+1} \left(x_i - \mu_Y \right)^2}{N+1}
= \frac{\sum_{i=1}^{N} \left(x_i - \mu_X - \frac{\varepsilon}{N + 1} \right)^2
+ \left(x_{N+1} - \mu_Y\right)^2}{N+1}$
Since $\sum_{i=1}^{N}\left(x_i - \mu_X\right) = 0$, the first sum equals
$N\sigma^2_X + \frac{N\varepsilon^2}{\left(N+1\right)^2}$, and since
$x_{N+1} - \mu_Y = \mu_X + \varepsilon - \mu_X - \frac{\varepsilon}{N+1}
= \frac{N\varepsilon}{N+1}$, the last term equals
$\frac{N^2\varepsilon^2}{\left(N+1\right)^2}$. Therefore
$\sigma^2_Y = \frac{N}{N+1}\sigma^2_X + \frac{N\varepsilon^2}{\left(N+1\right)^3}
+ \frac{N^2\varepsilon^2}{\left(N+1\right)^3}
= \frac{N}{N+1} \sigma^2_X + \frac{N}{\left(N+1\right)^2}\varepsilon^2$
When $x_{N+1}$ is equal to $\mu_X$, the variance of $Y$ is
$\frac{N}{N+1}\sigma^2_X < \sigma^2_X $
Thus, when $\varepsilon$ is sufficiently small $\sigma^2_Y$ is less than
$\sigma^2_X$. To determine how large $\varepsilon$ should be so that the
variance of $Y$ is greater than the variance of $X$, I set the two variances
equal.
$ \frac{N}{N+1} \sigma^2_X
+ \frac{N}{\left(N+1\right)^2}\varepsilon^2 = \sigma^2_X$
$ \frac{N}{\left(N+1\right)^2}\varepsilon^2 = \frac{1}{N+1} \sigma^2_X$
$ \varepsilon^2 = \frac{N+1}{N} \sigma^2_X $
$ \varepsilon = \pm \sigma_X
\sqrt{\frac{N+1}{N}}$
Thus, adding a unit whose observation is within $\sqrt{\frac{N+1}{N}}$ standard deviations
of the old mean will lead to a lower variance.
The following R script verifies the above conclusion:
set.seed(1)
N <- 10
X <- runif(N)
width <- sqrt((N + 1) / N)
# note: var()/sd() use the n - 1 divisor, but the same threshold sqrt((N+1)/N) applies
# on the boundary (compare with a tolerance; an exact == 0 test fails in floating point)
isTRUE(all.equal(var(c(X, mean(X) + width * sd(X))), var(X)))
# outside the boundary
var(c(X, mean(X) + width * sd(X) + 1)) - var(X) > 0
# inside the boundary
var(c(X, mean(X))) - var(X) < 0
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/117111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Find the mean and standard error for the mean. Have I used the correct formulas in this situation?
A university has 807 faculty members. For each faculty member, the number of refereed publications was recorded. This number is not directly available in the database, so it requires the investigator to examine each record separately. A frequency table of the number of refereed publications is given below for an SRS of 50 faculty members.
\begin{array}{|c|c|c|c|}
\hline
\text{Refereed Publications}& 0 & 1 & 2& 3 & 4 & 5& 6 & 7 & 8& 9 & 10 \\ \hline
\text{Faculty Members} &28 &4 &3& 4 & 4 & 2& 1 & 0 & 2& 1 & 1\\ \hline
\end{array}
b) Estimate the mean number of publications per faculty member, and give the SE for your estimate:
\begin{array}{|c|c|c|c|}
\hline
y& f & f*y & f*y^2 \\ \hline
0 &28 & 0&0\\ \hline
1 & 4& 4&4\\ \hline
2& 3& 6&12\\ \hline
3& 4& 12&36\\ \hline
4& 4& 16&64\\ \hline
5& 2& 10&50\\ \hline
6 & 1& 6&36\\ \hline
7 & 0& 0&0\\ \hline
8& 2& 16&128\\ \hline
9& 1& 9&81\\ \hline
10& 1& 10&100\\ \hline
\text{total}& 50&89 &511\\ \hline
\end{array}
\begin{align}
\bar{y} &= \frac{\sum fy}{\sum f} &\rightarrow& &\frac{89}{50} &= 1.78 \\[10pt]
SD &= \sqrt{ \frac{\sum fy^2}{\sum f}-(\bar{y})^2} &\rightarrow& &\sqrt{\frac{511}{50}-(1.78)^2} &= 2.66 \\[10pt]
SE(\bar{y}) &= \frac{s}{\sqrt{n}}\sqrt{1-\frac{n}{N}} &\rightarrow& &\frac{2.66}{\sqrt{50}}\sqrt{1-\frac{50}{807}} &= 0.3643
\end{align}
Did I do it correctly?
d) Estimate the proportion of faculty members with no publications and give a $95\%$ CI.
\begin{align}
p &= \frac{y}{n} &\rightarrow& &\frac{28}{50} &= 0.56 \\[10pt]
SE(p) &= \sqrt{\frac{p(1-p)}{n-1}\bigg(1-\frac{n}{N}\bigg)} &\rightarrow& &\sqrt{\frac{0.56(0.44)}{49}\bigg(1-\frac{50}{807}\bigg)} &= 0.0687 \\[10pt]
95\%\ CI &= p\pm 1.96[SE(p)] &\rightarrow& &0.56 \pm1.96(0.0687) &= [0.425,0.695]
\end{align}
Am I using the correct formulas?
| The formulas you used for the descriptive and inference statistics are correct for SRS (simple random sampling), including the finite-population correction in the variance estimation.
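For completeness, here are the same calculations in R, using the frequency table from the question, so the numbers can be checked directly:
y <- 0:10
f <- c(28, 4, 3, 4, 4, 2, 1, 0, 2, 1, 1)
n <- sum(f); N <- 807
ybar <- sum(f * y) / n                    # 1.78
s <- sqrt(sum(f * y^2) / n - ybar^2)      # 2.66 (same divisor-n formula as above)
se <- s / sqrt(n) * sqrt(1 - n / N)       # about 0.364
p <- f[1] / n                             # 0.56
se_p <- sqrt(p * (1 - p) / (n - 1) * (1 - n / N))
c(ybar, se, p, p - 1.96 * se_p, p + 1.96 * se_p)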
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/304894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Stationary Distributions of a reducible Markov chain I was trying to get all stationary distributions of the following Markov chain. Intuitively, I would say there are two, resulting from splitting up the reducible Markov chain into two irreducible ones. However, I feel this is not mathematically correct. How else would I be able to find all stationary distributions?
\begin{bmatrix}
\frac{1}{3} & \frac{2}{3} & 0 & 0 & 0 \\
\frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\
0 & \frac{1}{2} & 0 & \frac{1}{2} & 0 \\
0 & 0 & 0 & \frac{1}{4} & \frac{3}{4} \\
0 & 0 & 0 & \frac{1}{3} & \frac{2}{3} \\
\end{bmatrix}
| Conditioned on $X_0\in\{0,1\}$ we have from solving
\begin{align}
\pi_0 &= \frac13\pi_0 + \frac12\pi_1\\
\pi_1 &= \frac23\pi_0 + \frac12\pi_1\\
\pi_0 + \pi_1 &= 1
\end{align}
$\pi_0 = \frac37$, $\pi_1=\frac 47$.
Conditioned on $X_0\in\{3,4\}$ we have by solving a similar system of equations $\pi_3 = \frac4{13}$, $\pi_4=\frac9{13}$.
Conditioned on $X_0=2$ we have $\tilde\pi_i = \frac12\pi_i$ for $i\in\{0,1,3,4\}$.
So we have three stationary distributions: those obtained by conditioning on $X_0\in\{0,1\}$ and $X_0\in\{3,4\}$, and the one obtained by conditioning on $X_0=2$:
$$
\tilde\pi = \left(
\begin{array}{ccccc}
\frac{3}{14} & \frac{2}{7} & 0 & \frac{2}{13} & \frac{9}{26} \\
\end{array}
\right).
$$
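A quick R check that each of these vectors satisfies $\pi P = \pi$ (indeed any convex combination of the first two is stationary):
P <- matrix(c(1/3, 2/3, 0,   0,   0,
              1/2, 1/2, 0,   0,   0,
              0,   1/2, 0, 1/2,   0,
              0,   0,   0, 1/4, 3/4,
              0,   0,   0, 1/3, 2/3), nrow = 5, byrow = TRUE)
pi1 <- c(3/7, 4/7, 0, 0, 0)
pi2 <- c(0, 0, 0, 4/13, 9/13)
pim <- (pi1 + pi2) / 2
sapply(list(pi1, pi2, pim), function(p) max(abs(p %*% P - p)))  # all essentially 0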
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/438165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
MSE of the Jackknife Estimator for the Uniform distribution The Jackknife is a resampling method, a predecessor of the Bootstrap, which is useful for estimating the bias and variance of a statistic. This can also be used to apply a "bias correction" to an existing estimator.
Given the estimand $\theta$ and an estimator $\hat\theta \equiv \hat\theta(X_1, X_2, \cdots X_n)$, the Jackknife estimator (with respect to $\hat\theta$) is defined as
$$\hat\theta_J = \hat\theta + (n-1)\left(\hat\theta - \frac{1}{n}\sum_{i=1}^n\hat\theta_{-i}\right),$$
where the $\hat\theta_{-i}$ terms denote the estimated value ($\hat\theta$) after "holding out" the $i^{th}$ observation.
Let $X_1, X_2, \cdots X_n \stackrel{\text{iid}}{\sim} \text{Unif}(0, \theta)$ and consider the estimator $\hat\theta = X_{(n)}$ (i.e. the maximum value, also the MLE). Note that
$$\hat\theta_{-i} = \begin{cases}
X_{(n-1)}, & X_i = X_{(n)} \\[1.2ex]
X_{(n)}, & X_i \neq X_{(n)}
\end{cases}.$$
Thus the Jackknife estimator here can be written as a linear combination of the two largest values
\begin{align*}
\hat\theta_J &= X_{(n)} + \frac{n-1}{n}\left(X_{(n)} - X_{(n-1)}\right) \\[1.3ex]
&= \frac{2n-1}{n}X_{(n)} - \frac{n-1}{n} X_{(n-1)}.
\end{align*}
What is the bias, variance and mean square error ?
| It is well known that the order statistics, sampled from a uniform distribution, are Beta-distributed random variables (when properly scaled).
$$\frac{X_{(j)}}{\theta} \sim \text{Beta}(j, n+1-j)$$
Using standard properties of the Beta distribution we can obtain the mean and variance of $X_{(n)}$ and $X_{(n-1)}$.
Bias
\begin{align*}
E\left(\hat\theta_J\right) &= \frac{2n-1}{n}E(X_{(n)}) - \frac{n-1}{n}E(X_{(n-1)}) \\[1.3ex]
&= \frac{2n-1}{n}\frac{n}{n+1}\theta - \frac{n-1}{n}\frac{n-1}{n+1}\theta \\[1.3ex]
&= \frac{n(n+1) - 1}{n(n+1)} \theta
\end{align*}
Therefore the bias of $\hat\theta_J$ is given by
$$\text{Bias}_\theta(\hat\theta_J) = \frac{-\theta}{n(n+1)}$$
Variance
Note: For uniform order statistics, $\text{Cov}(X_{(i)}, X_{(j)}) = \frac{i(n-j+1)}{(n+1)^2(n+2)}\theta^2$ for $i \le j$; taking $i = n-1$ and $j = n$ gives the covariance used below.
\begin{align*}
\text{Var}\left(\hat\theta_J\right) &= \frac{(2n-1)^2}{n^2}\text{Var}(X_{(n)}) + \frac{(n-1)^2}{n^2}\text{Var}(X_{(n-1)}) - 2 \frac{2n-1}{n}\frac{n-1}{n}\text{Cov}(X_{(n)}, X_{(n-1)}) \\[1.3ex]
&= \frac{(2n-1)^2}{n^2}\frac{n\theta^2}{(n+1)^2(n+2)} + \frac{(n-1)^2}{n^2}\frac{2(n-1)\theta^2}{(n+1)^2(n+2)} - \frac{2(2n-1)(n-1)}{n^2}\frac{(n-1)\theta^2}{(n+1)^2(n+2)} \\[1.5ex]
\end{align*}
$$= \frac{(2n^2-1)\theta^2}{n(n+1)^2(n+2)}$$
MSE
Using the decomposition $\text{MSE}_\theta(\hat\theta) = \text{Bias}^2_\theta(\hat\theta) + \text{Var}(\hat\theta)$, we have
\begin{align*}
\text{MSE}_\theta(\hat\theta_J) &= \left(\frac{-\theta}{n(n+1)}\right)^2 + \frac{(2n^2-1)\theta^2}{n(n+1)^2(n+2)} \\[1.3ex]
&= \frac{2(n-1+1/n)\theta^2}{n(n+1)(n+2)}\\[1.3ex]
&= \mathcal O(n^{-2})
\end{align*}
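A small Monte Carlo sketch (the values of $\theta$, $n$ and the number of replications below are arbitrary choices) that checks the bias and variance formulas:
set.seed(1)
theta <- 2; n <- 10; R <- 1e5
top2 <- replicate(R, sort(runif(n, 0, theta), decreasing = TRUE)[1:2])
theta_J <- (2 * n - 1) / n * top2[1, ] - (n - 1) / n * top2[2, ]
c(mean(theta_J) - theta, -theta / (n * (n + 1)))                      # bias
c(var(theta_J), (2 * n^2 - 1) * theta^2 / (n * (n + 1)^2 * (n + 2)))  # variance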
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/458883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Pdf of the sum of two independent Uniform R.V., but not identical
Question. Suppose $X \sim U([1,3])$ and $Y \sim U([1,2] \cup [4,5])$ are two independent random variables (but obviously not identically distributed). Find the pdf of $X + Y$.
So far. I'm familiar with the theoretical mechanics to set up a solution. So, if we let $\lambda$ be the Lebesgue measure and notice that $[1,2]$ and $[4,5]$ are disjoint, then the pdfs are
$$f_X(x) =
\begin{cases}
\frac{1}{2}, &x \in [1,3] \\
0, &\text{otherwise}
\end{cases}
\quad\text{and}\quad
f_Y(y) =
\begin{cases}
\frac{1}{\lambda([1,2] \cup [4,5])} = \frac{1}{1 + 1} = \frac{1}{2}, &y \in [1,2] \cup [4,5] \\
0, &\text{otherwise}
\end{cases}
$$
Now, let $Z = X + Y$. Then, the pdf of $Z$ is the following convolution
$$f_Z(t) = \int_{-\infty}^{\infty}f_X(x)f_Y(t - x)dx = \int_{-\infty}^{\infty}f_X(t -y)f_Y(y)dy.$$
To me, the latter integral seems like the better choice to use. So, we have that $f_X(t -y)f_Y(y)$ is either $0$ or $\frac{1}{4}$. But I'm having some difficulty on choosing my bounds of integration?
| Here is a plot as suggested by comments
What I was getting at is that it is a bit cumbersome to draw a picture for problems with disjoint intervals (see my comment above). It's not bad here, but suppose we had $X \sim U([1,5])$ and $Y \sim U([1,2] \cup [4,5] \cup [7,8] \cup [10, 11])$.
Using @whuber's idea: we notice that the parallelogram from $[4,5]$ is just a translation of the one from $[1,2]$. So, if we let $Y_1 \sim U([1,2])$ and keep the weight $\tfrac12$ that this piece carries in $f_Y$, we find that its contribution to the density of $X+Y$ is
$$f_{X+Y_1}(z) =
\begin{cases}
\frac{1}{4}z - \frac{1}{2}, &z \in (2,3) \tag{$\dagger$}\\
\frac{1}{4}, &z \in (3,4)\\
\frac{5}{4} - \frac{1}{4}z, &z \in (4,5)\\
0, &\text{otherwise}
\end{cases}
$$
Since $Y_2 \sim U([4,5])$ is $Y_1$ shifted by $3$, the contribution from the $[4,5]$ piece is obtained from $(\dagger)$ by replacing $z$ with $z-3$ (moving the support from $(2,5)$ to $(5,8)$). Adding the two contributions, you arrive at ($\star$) below.
Brute force way:
*$\mathbf{2 < z < 3}$: $y=1$ to $y = z-1$, which gives $\frac{1}{4}z - \frac{1}{2}$.
*$\mathbf{3 < z < 4}$: $y=1$ to $y = 2$ (the whole of $[1,2]$), which gives $\int_1^{2}\frac{1}{4}dy = \frac{1}{4}$.
*$\mathbf{4 < z < 5}$: $y=z-3$ to $y=2$, which gives $\frac{5}{4} - \frac{1}{4}z$.
*$\mathbf{5 < z < 6}$: $y=4$ to $y = z-1$, which gives $\frac{1}{4}z - \frac{5}{4}$.
*$\mathbf{6 < z < 7}$: $y = 4$ to $y = 5$ (the whole of $[4,5]$), which gives $\int_4^{5}\frac{1}{4}dy = \frac{1}{4}$.
*$\mathbf{7 < z < 8}$: $y = z-3$ to $y=5$, which gives $2 - \frac{1}{4}z$.
Therefore,
$$f_Z(z) =
\begin{cases}
\frac{1}{4}z - \frac{1}{2}, &z \in (2,3) \tag{$\star$}\\
\frac{1}{4}, &z \in (3,4)\\
\frac{5}{4} - \frac{1}{4}z, &z \in (4,5)\\
\frac{1}{4}z - \frac{5}{4}, &z \in (5,6)\\
\frac{1}{4}, &z \in (6,7)\\
2 - \frac{1}{4}z, &z \in (7,8)\\
0, &\text{otherwise}
\end{cases}$$
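A Monte Carlo sketch to check the result visually:
set.seed(1)
n <- 1e6
x <- runif(n, 1, 3)
y <- ifelse(runif(n) < 0.5, runif(n, 1, 2), runif(n, 4, 5))
z <- x + y
hist(z, breaks = 200, probability = TRUE, main = "X + Y")
fz <- function(z) (z - 2) / 4 * (z > 2 & z < 3) + 1 / 4 * (z >= 3 & z <= 4) +
  (5 - z) / 4 * (z > 4 & z < 5) + (z - 5) / 4 * (z > 5 & z < 6) +
  1 / 4 * (z >= 6 & z <= 7) + (8 - z) / 4 * (z > 7 & z < 8)
curve(fz(x), from = 2, to = 8, add = TRUE, col = "red", lwd = 2)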
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/489224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Generating random variables from a given distribution function using inversion sampling Given this distribution function $f(x)$ :
$$
f\left(x\right)=\left\{\begin{matrix}x+1,-1\le x\le0\\1-x,0<x\le1\\\end{matrix}\right.
$$
Generate random variables using Inverse sampling method in R:
here is my attempt :
f <- function(x){
  ifelse(x >= -1 & x <= 0, x + 1, ifelse(x > 0 & x <= 1, 1 - x, 0))  # use & (vectorised), not &&
}
isTRUE(all.equal(integrate(f, -1, 1)$value, 1))  # the density integrates to 1
plot(Vectorize(f),xlim = c(-2,2))
$$
F\left(x\right)=\left\{\begin{matrix}\frac{x^2}{2}+x,-1\le x\le0\\x-\frac{x^2}{2},0<x\le1\\\end{matrix}\right.
$$
$F^{-1}$:
F_inver <- function(x){ifelse(x<=0&&x>=-1,1-sqrt(2*x+1),1-sqrt(-2*x+1))}
I believe that my inverse function isn't correct
| The cumulative distribution function, $F(x)$, is given by
$$
F(x) = \int_{-\infty}^x f(t)dt
$$
So for, $- 1 \leq x \leq 0$,
\begin{align*}
F(x) &= \int_{-\infty}^x f(t)dt \\
&= \frac{x^2}{2} +x + \frac{1}{2}
\end{align*}
and for $0 \leq x \leq 1$,
\begin{align*}
F(x) &= \int_{-\infty}^0 f(t)dt + \int_0^x f(t)dt \\
& = \frac{1}{2} + x - \frac{x^2}{2}
\end{align*}
Thus,
$$
F\left(x\right)=\left\{
\begin{align*}
&\frac{x^2}{2}+x + \frac{1}{2},-1 \leq x \leq 0 \\
&\frac{1}{2} +x -\frac{x^2}{2},0< x \leq 1
\end{align*}
\right.
$$
For $0 \leq y \leq \frac{1}{2}$,
\begin{align*}
y = F(x) &\iff y = \frac{x^2}{2}+x + \frac{1}{2} \\
&\iff \frac{x^2}{2} +x + \frac{1}{2} - y = 0
\end{align*}
The last line is a second order polynomial equation whose determinant is $2y >0$.
The solutions are thus $-1 \pm \sqrt{2y}$. Since $-1 \leq x \leq 0$, we have $x= -1 + \sqrt{2y}$
For $\frac{1}{2} \leq y \leq 1$,
\begin{align*}
y = F(x) &\iff y = -\frac{x^2}{2}+ x + \frac{1}{2} \\
&\iff -\frac{x^2}{2} +x + \frac{1}{2} - y = 0
\end{align*}
Repeating the same process as before we find
$$
x = 1 - \sqrt{2(1-y)}
$$
Thus the inverse function of $F(x)$ (the quantile function) is given by:
$$
F^{-1}(y) = \left\{
\begin{align*}
&-1 + \sqrt{2y}, \ 0 \leq y \leq \frac{1}{2} \\
&1-\sqrt{2(1-y)}, \ \frac{1}{2} < y \leq 1
\end{align*}
\right.
$$
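Putting this into R, here is a sketch of the inversion sampler based on the quantile function just derived, with a histogram check against the target density:
set.seed(1)
F_inv <- function(u) ifelse(u <= 0.5, -1 + sqrt(2 * u), 1 - sqrt(2 * (1 - u)))
x <- F_inv(runif(1e5))
hist(x, breaks = 100, probability = TRUE, main = "Inversion sampling")
curve(ifelse(x <= 0, x + 1, 1 - x), from = -1, to = 1, add = TRUE, col = "red", lwd = 2)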
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/526178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Matrix representation of the OLS of an AR(1) process, Is there any precise way to express the OLS estimator of the centred error terms $\{u_t\}
_{t=1}^{n}$ that follows an AR(1) process? In other words, for
\begin{equation}
u_t=\rho u_{t-1}+\varepsilon_t,\quad \varepsilon_t\sim N(0,\sigma^2)
\end{equation}
is there a matrix representation for
\begin{equation}
\hat{\rho}=\frac{(1/n)\sum\limits_{t=1}^{n}u_tu_{t-1}}{(1/n)\sum\limits_{t=1}^{n}u_{t-1}^2}
\end{equation}
? I suspect there should be. However, I seem to fail to find it in Hamilton or other sources or derive an elegant expression myself.
Much appreciated in advance
| To facilitate our analysis, we will use the following $(n-1) \times n$ matrices:
$$\mathbf{M}_0 \equiv \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
\end{bmatrix}
\quad \quad \quad
\mathbf{M}_1 \equiv \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 1 \\
\end{bmatrix},$$
and the following $n \times n$ matrices:
$$\begin{align}
\mathbf{G}_0
&\equiv \mathbf{M}_0^\text{T} \mathbf{M}_1
= \begin{bmatrix}
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
0 & 0 & 0 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 0 & 1 \\
0 & 0 & 0 & \cdots & 0 & 0 \\
\end{bmatrix} \\[20pt]
\mathbf{G}_1
&\equiv \mathbf{M}_0^\text{T} \mathbf{M}_0
= \begin{bmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 \\
0 & 0 & 0 & \cdots & 0 & 0 \\
\end{bmatrix}.
\end{align}$$
Given the observable time-series vector $\mathbf{u} = (u_1,...,u_n)$ we can then write the model in matrix form as:
$$\mathbf{M}_1 \mathbf{u} = \rho \mathbf{M}_0 \mathbf{u} + \sigma \boldsymbol{\varepsilon}
\quad \quad \quad \quad \quad
\boldsymbol{\varepsilon} \sim \text{N}(\mathbf{0}, \mathbf{I}).$$
The OLS estimator for the parameter $\rho$ is:
$$\begin{align}
\hat{\rho}_\text{OLS}
&= (\mathbf{u}^\text{T} \mathbf{M}_0^\text{T} \mathbf{M}_0 \mathbf{u} )^{-1} (\mathbf{u}^\text{T} \mathbf{M}_0^\text{T} \mathbf{M}_1 \mathbf{u} ) \\[12pt]
&= (\mathbf{u}^\text{T} \mathbf{G}_1 \mathbf{u} )^{-1} (\mathbf{u}^\text{T} \mathbf{G}_0 \mathbf{u} ) \\[12pt]
&= \frac{\mathbf{u}^\text{T} \mathbf{G}_0 \mathbf{u}}{\mathbf{u}^\text{T} \mathbf{G}_1 \mathbf{u}} \\[12pt]
&= \frac{\sum_{i=1}^{n-1} u_i u_{i+1}}{\sum_{i=1}^{n-1} u_i^2 }.
\end{align}$$
Note that the OLS estimator for an auto-regressive process is not equivalent to the MLE, since the log-likelihood contains a log-determinant term that is a function of the auto-regression parameter. The MLE can be obtained via iterative methods if desired.
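A short R sketch (with arbitrary $\alpha$ and $n$) confirming that the matrix expression reproduces the usual ratio of sums:
set.seed(1)
n <- 200; alpha <- 0.6
e <- rnorm(n); u <- numeric(n)
for (t in 2:n) u[t] <- alpha * u[t - 1] + e[t]
M0 <- cbind(diag(n - 1), 0)   # selects u_1, ..., u_{n-1}
M1 <- cbind(0, diag(n - 1))   # selects u_2, ..., u_n
G0 <- t(M0) %*% M1
G1 <- t(M0) %*% M0
rho_matrix <- as.numeric((t(u) %*% G0 %*% u) / (t(u) %*% G1 %*% u))
rho_sums   <- sum(u[-n] * u[-1]) / sum(u[-n]^2)
c(rho_matrix, rho_sums)       # identical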
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/539042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Derivation of integrating over many parameters in Neyman-Scott Problem? I am trying to follow the derivation for the variance estimator in the Neyman-Scott problem given in this article. However, I'm not sure how they go from the 2nd to the 3rd line of this derivation. Any help is appreciated, thanks!
| Each of the integrals indexed by $i$ in the product has the form
$$\int_{\mathbb R} \exp\left[\phi(\mu_i,x_i,y_i,\sigma)\right]\,\mathrm{d}\mu_i.$$
When $\phi$ is linear or quadratic in $\mu_i,$ such integrals have elementary values. The more difficult circumstance is the quadratic. We can succeed with such integrations merely by knowing
$$\int_\mathbb{R} \exp\left[-\frac{1}{2}z^2\right]\,\mathrm{d}z = \sqrt{2\pi}.$$
Generally, let $\theta$ denote everything that isn't $\mu_i$ ($\theta=(x_i,y_i,\sigma)$ in this instance). Writing $x=\mu_i,$ the quadratic case is when $\phi$ can be expressed in the form
$$\phi(x,\theta) = -A(\theta)^2 x^2 + B(\theta)x + C(\theta)$$
for arbitrary functions $A, B, C$ and $A(\theta)\ne 0.$ The reasons for expressing the coefficient of $x^2$ in this way are (i) to guarantee the integral exists and (ii) to avoid using square roots.
Thus, we are concerned with evaluating
$$f(\theta) = \int_\mathbb{R} \exp\left[\phi(x,\theta)\right]\,\mathrm{d}x=\int_\mathbb{R} \exp\left[-A(\theta)^2x^2 + B(\theta)x + C(\theta)\right]\,\mathrm{d}x.$$
This done by completing the square, in exactly the same way the quadratic formula is traditionally derived. The result amounts to changing the variable of integration to $z$ where
$$z = A\sqrt{2}\,x - \frac{B}{A\sqrt{2}};\quad \mathrm{d}z = A\sqrt{2}\,\mathrm{d}x.$$
In terms of $z,$
$$\exp\left[\phi(x,\theta)\right] = \exp\left[-\frac{1}{2}z^2 + C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right]= \exp\left[-\frac{1}{2}z^2\right] \exp\left[C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right].$$
The integral thereby becomes
$$\begin{aligned}
f(\theta) &= \frac{1}{A\sqrt{2}} \int_\mathbb{R} \exp\left[-\frac{1}{2}z^2\right] \exp\left[C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right]\,\mathrm{d}z \\
&= \frac{1}{A\sqrt 2} \sqrt{2\pi}\exp\left[C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right] \\
&=\frac{\sqrt \pi}{A} \exp\left[C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right].
\end{aligned}$$
Let's memorialize this for future reference:
$$\boxed{\int_\mathbb{R} \exp\left[-A(\theta)^2x^2 + B(\theta)x + C(\theta)\right]\,\mathrm{d}x = \frac{\sqrt \pi}{A} \exp\left[C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2}\right].}\tag{*}$$
To apply this to the derivation in the question, simply look at the argument of the exponential in the integrals and break it down into the form of $\phi;$ namely, as a linear combination of $\mu_i^2,$ $\mu_i,$ and a constant:
$$-\frac{(x_i-\mu_i)^2 + (y_i-\mu_i)^2}{2\sigma^2} = -\frac{1}{\sigma^2}\mu_i^2 + \frac{x_i+y_i}{\sigma^2} \mu_i + -\frac{x_i^2 + y_i^2}{2\sigma^2},$$
from which we read off
$$\cases{A = \frac{1}{\sigma} \\ B = \frac{x_i+y_i}{\sigma^2} \\ C = -\frac{x_i^2 + y_i^2}{2\sigma^2},}$$
whence
$$C(\theta) + \frac{B(\theta)^2}{4A(\theta)^2} = -\frac{x_i^2 + y_i^2}{2\sigma^2} + \left(\frac{x_i+y_i}{\sigma^2}\right)^2 \frac{\sigma^2}{4} = -\frac{(x_i-y_i)^2}{4\sigma^2}.$$
Plugging everything into the formula $(*)$ gives--also by visual inspection--
$$f(x_i,y_i,\sigma) = f(\theta) = \sigma\sqrt\pi \exp\left[ -\frac{(x_i-y_i)^2}{4\sigma^2}\right].$$
This takes us from the second line to the third line in the derivation.
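If you want to convince yourself of the boxed formula numerically, here is a quick R sketch with arbitrary values of $A$, $B$ and $C$:
A <- 1.3; B <- -0.4; C <- 0.2   # arbitrary illustrative values
numeric <- integrate(function(x) exp(-A^2 * x^2 + B * x + C), lower = -Inf, upper = Inf)$value
closed  <- sqrt(pi) / A * exp(C + B^2 / (4 * A^2))
all.equal(numeric, closed)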
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/563452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Bounding the difference between square roots I want to compute the value of $\frac{1}{\sqrt{a + b + c}}$. Say I can observe a and b, but not c. Instead, I can observe d which is a good approximation for c in the sense that $P( |c-d| \leq 0.001 )$ is large (say 95%), and both c and d are known to have $|c| \leq 1, |d| \leq 1$ so a difference of 0.001 is actually small.
I want to argue that $\frac{1}{\sqrt{a + b + d}}$ is a good approximation because d is a good approximation for c. Is there anything I can say about the difference
$\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}}$?
Maybe I could say something like $P(\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}} \leq 0.001) = x?$ I'm worried that being under a square root might mess things up.
Would I need to find out the exact distribution of $c - d$, or anything else before I can make such claims?
| Use a Taylor series (or equivalently the Binomial Theorem) to expand around $c$. This is valid provided $|d-c| \lt |a+b+c|$:
$$\eqalign{
&\frac{1}{\sqrt{a+b+c}} - \frac{1}{\sqrt{a+b+d}}\\
&= (a+b+c)^{-1/2} - (a+b+c + (d-c))^{-1/2} \\
&= (a+b+c)^{-1/2} - (a+b+c)^{-1/2}\left(1 + \frac{d-c}{a+b+c}\right)^{-1/2} \\
&= (a+b+c)^{-1/2} - (a+b+c)^{-1/2}\sum_{j=0}^{\infty}\binom{-1/2}{j}\left(\frac{d-c}{a+b+c}\right) ^j\\
&= \frac{1}{2}(a+b+c)^{-3/2}(d-c) - \frac{3}{8}(a+b+c)^{-5/2}O(d-c)^2
}
$$
The difference therefore is approximately $\frac{1}{2}(a+b+c)^{-3/2}$ times $(d-c)$ and the error is (a) negative (because this is an alternating series when $d-c$ is positive and has all negative terms when $d-c$ is negative), (b) proportional to $\frac{3}{8}(a+b+c)^{-5/2}$, and (c) of second order in $d-c$. That should be enough to complete your analysis. (This leads essentially to the delta method.)
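A quick numerical illustration of the leading term (the values below are arbitrary, with $|d-c|$ small relative to $a+b+c$; cc and dd stand in for $c$ and $d$):
a <- 2; b <- 3; cc <- 0.4; dd <- 0.4007
exact  <- 1 / sqrt(a + b + cc) - 1 / sqrt(a + b + dd)
approx <- 0.5 * (a + b + cc)^(-3/2) * (dd - cc)
c(exact, approx)   # agree to second order in dd - cc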
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/20409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the pdf of Y when pdf of X is given $$
f_{X}(x) = \frac{3}{8}(x+1)^{2} ,\ -1 < x < 1
$$
$$Y = \begin{cases} 1 - X^{2} & X \leq 0,\\
1- X, & X > 0.\end{cases}$$
I started with :
$$
F_{Y}(y) = 1 - P(Y \leq y)
$$
$$
= 1 - [P(-(1-y)^\frac {1}{2} < X < (1-y)]
$$
From here, I can get $F_{Y}(y)$, and differentiating it will give me $f_{Y}(y)$.
But the answer I am getting for pdf is not the desired answer. Am I doing anything wrong?
Thanks for your help.
| The probability density function of $Y$ can be found by:
$$
f_{Y}(y)=\sum_{i}f_{X}(g_{i}^{-1}(y))\left|\frac{dg_{i}^{-1}(y)}{dy}\right|,\quad \mathrm{for}\; y \in \mathcal{S}_{Y}
$$
where $g_{i}^{-1}$ denotes the inverse of the transformation function and $\mathcal{S}_{Y}$ the support of $Y$. Let's denote our two transformation functions
$$
\begin{align}
g_{1}(X) &= 1-X^{2}, & X\leq 0\\
g_{2}(X) &= 1-X, & X>0\\
\end{align}
$$
The support of $Y$ is the set $\mathcal{S}_{Y}=\{y=g(x):x\in\mathcal{S}_{X}\}$ where $\mathcal{S}_{X}$ denotes the support set of $X$. Hence, the support of $Y$ is $y \in (0,1]$. Further, we need the inverse transformations $g_{1}^{-1}(y)$ and $g_{2}^{-1}(y)$. They are given by:
$$
\begin{align}
g_{1}^{-1}(y) &= -\sqrt{1-y}\\
g_{2}^{-1}(y) &= 1-y \\
\end{align}
$$
In the first inverse, we need only the negative signed function because $x\leq 0$. The derivatives are:
$$
\begin{align}
\left|\frac{dg_{1}^{-1}(y)}{dy}\right| &=\frac{1}{2\sqrt{1-y}}\\
\left|\frac{dg_{2}^{-1}(y)}{dy}\right| &= \left|-1\right| = 1 \\
\end{align}
$$
So the PDF of $Y$ is given by:
$$
\begin{align}
f_{Y}(y) &= f_{X}(-\sqrt{1-y})\cdot \frac{1}{2\sqrt{1-y}} + f_{X}(1-y)\cdot 1 \\
&= \frac{3}{8}(1-\sqrt{1-y})^{2}\cdot \frac{1}{2\sqrt{1-y}} + \frac{3}{8}(2-y)^{2}\cdot 1 \\
&= \begin{cases}
\frac{3}{16}\left(6+\frac{2}{\sqrt{1-y}}+y\cdot\left(2y-\frac{1}{\sqrt{1-y}}-8\right)\right), & 0 < y \leq 1\\
0, &\mathrm{otherwise}
\end{cases}
\end{align}
$$
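A Monte Carlo sketch of the result, sampling $X$ by inverting its CDF $F_X(x) = (x+1)^3/8$:
set.seed(1)
x <- 2 * runif(1e6)^(1/3) - 1          # X has CDF (x+1)^3 / 8 on (-1, 1)
y <- ifelse(x <= 0, 1 - x^2, 1 - x)
hist(y, breaks = 100, probability = TRUE, main = "Density of Y")
fY <- function(y) 3/16 * (6 + 2 / sqrt(1 - y) + y * (2 * y - 1 / sqrt(1 - y) - 8))
curve(fY(x), from = 0.01, to = 0.99, add = TRUE, col = "red", lwd = 2)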
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/70709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
If f(x) is given, what would be the distribution of Y = 2X + 1?
If the random variable X has a continuous distribution with the
density $ f(x) = \frac{1}{x^2}\Bbb {1}_{(1, ∞)}(x)$, can you find the
distribution of $Y = 2X+1$?
My attempt:
$CDF(Y)$
$\Rightarrow P(Y\le y)$
$\Rightarrow P(2X + 1 \le y)$
$\Rightarrow P(X \le \frac{y-1}{2})$
$\Rightarrow \int_{1}^{(y-1)/2} f(x) dx$
$\Rightarrow \int_{1}^{(y-1)/2} \frac{1}{x^2} \Bbb 1_{(1, \infty)}(x) dx$
$\Rightarrow \int_{1}^{(y-1)/2} \frac{1}{x^2} dx$
$\Rightarrow \frac{1}{-2+1}[x^{-2+1}]_1^{(y-1)/2}$
$\Rightarrow 1 - \frac{2}{y-1}$ Ans.
Is it a correct approach?
| Since the transformation function is monotonic, we can find the CDF by using PDF transformation and integrating the transformed PDF.
PDF Transformation:
$$ f_Y(y) = f_X(g^{-1}(y)) \Bigg|\frac{dg^{-1}}{dy} \Bigg|$$
For this situation, $g^{-1}(y) = \frac{y-1}{2}$, and by substitution:
$$f_X(g^{-1}(y)) = \frac{1}{((y-1)/2)^2} = \frac{4}{(y-1)^2}$$
The absolute value of the derivative of $g^{-1}(y)$ with respect to $y$ is easy:
\begin{align}
\Bigg|\frac{dg^{-1}}{dy} \Bigg| = \frac{1}{2}
\end{align}
Plug components into the PDF Transformation formula above to get a transformed PDF of :
\begin{align}
f_Y(y) = \frac{2}{(y-1)^2} \quad \text{for} \quad 3 \leq y \lt \infty
\end{align}
Remembering to use transformed lower and upper bounds, we integrate to get the CDF. Line 3 employs u-substitution to simplify the integration:
\begin{align}
F_Y(y) &=\int_{3}^{y} f_Y(t) \, dt\\
&=\int_{3}^{y} \frac{2}{(t-1)^2} \, dt\\
&=\int_{2}^{y-1} \frac{2}{u^2} \, du\\
&= -\frac{2}{u} \; \Bigg|_2^{y-1}\\
&= -\frac{2}{y-1} - (-\frac{2}{2})\\
&= 1 - \frac{2}{y-1} \quad \text{for} \quad 3 \leq y \lt \infty
\end{align}
Finally, we'll check the lower and upper bounds just to make sure we've fulfilled CDF validity requirements:
$$F_Y(3) = 1 - \frac{2}{2} = 0$$
$$F_Y(\infty) = 1 - \frac{2}{\infty} = 1$$
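A quick simulation sketch: since $F_X(x) = 1 - 1/x$ on $(1,\infty)$, $X$ can be sampled as $1/(1-U)$, and the empirical CDF of $Y = 2X+1$ matches $1 - 2/(y-1)$:
set.seed(1)
x <- 1 / (1 - runif(1e6))   # X has CDF 1 - 1/x on (1, Inf)
yv <- 2 * x + 1
y0 <- c(4, 6, 10, 20)
rbind(empirical = ecdf(yv)(y0), theoretical = 1 - 2 / (y0 - 1))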
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/245341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables.
The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$
Transforming to polar coordinates $(X,Y)\to(Z,\Theta)$ such that $$X=Z\cos\Theta\qquad\text{ and }\qquad Y=Z\sin\Theta$$
So, $z=\sqrt{x^2+y^2}$ and $0< x,y<1\implies0< z<\sqrt 2$.
When $0< z<1$, we have $0< \cos\theta<1,\,0<\sin\theta<1$ so that $0<\theta<\frac{\pi}{2}$.
When $1< z<\sqrt 2$, we have $z\cos\theta<1\implies\theta>\cos^{-1}\left(\frac{1}{z}\right)$, as $\cos\theta$ is decreasing on $\theta\in\left[0,\frac{\pi}{2}\right]$; and $z\sin\theta<1\implies\theta<\sin^{-1}\left(\frac{1}{z}\right)$, as $\sin\theta$ is increasing on $\theta\in\left[0,\frac{\pi}{2}\right]$.
So, for $1< z<\sqrt 2$, we have $\cos^{-1}\left(\frac{1}{z}\right)<\theta<\sin^{-1}\left(\frac{1}{z}\right)$.
The absolute value of jacobian of transformation is $$|J|=z$$
Thus the joint density of $(Z,\Theta)$ is given by
$$f_{Z,\Theta}(z,\theta)=z\mathbf 1_{\{z\in(0,1),\,\theta\in\left(0,\pi/2\right)\}\bigcup\{z\in(1,\sqrt2),\,\theta\in\left(\cos^{-1}\left(1/z\right),\sin^{-1}\left(1/z\right)\right)\}}$$
Integrating out $\theta$, we obtain the pdf of $Z$ as
$$f_Z(z)=\frac{\pi z}{2}\mathbf 1_{0<z<1}+\left(\frac{\pi z}{2}-2z\cos^{-1}\left(\frac{1}{z}\right)\right)\mathbf 1_{1<z<\sqrt 2}$$
Is my reasoning above correct? In any case, I would like to avoid this method and instead try to find the cdf of $Z$ directly. But I couldn't find the desired areas while evaluating $\mathrm{Pr}(Y\le \sqrt{z^2-X^2})$ geometrically.
EDIT.
I tried finding the distribution function of $Z$ as
\begin{align}
F_Z(z)&=\Pr(Z\le z)
\\&=\Pr(X^2+Y^2\le z^2)
\\&=\iint_{x^2+y^2\le z^2}\mathbf1_{0<x,y<1}\,\mathrm{d}x\,\mathrm{d}y
\end{align}
Mathematica says this should reduce to
$$F_Z(z)=\begin{cases}0 &,\text{ if }z<0\\ \frac{\pi z^2}{4} &,\text{ if } 0< z<1\\ \sqrt{z^2-1}+\frac{z^2}{2}\left(\sin^{-1}\left(\frac{1}{z}\right)-\sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right)\right) &,\text{ if }1< z<\sqrt 2\\ 1 &,\text{ if }z>\sqrt 2 \end{cases}$$
which looks like the correct expression. Differentiating $F_Z$ for the case $1< z<\sqrt 2$ though brings up an expression which doesn't readily simplify to the pdf I already obtained.
Finally, I think I have the correct pictures for the CDF:
For $0<z<1$ :
And for $1<z<\sqrt 2$ :
Shaded portions are supposed to indicate the area of the region $$\left\{(x,y):0<x,y< 1\,,\,x^2+y^2\le z^2\right\}$$
The picture immediately yields
\begin{align}
F_Z(z)&=\Pr\left(-\sqrt{z^2-X^2}\le Y\le\sqrt{z^2-X^2}\right)
\\&=\begin{cases}\frac{\pi z^2}{4} &,\text{ if } 0<z<1\\\\ \sqrt{z^2-1}+\int_{\sqrt{z^2-1}}^1 \sqrt{z^2-x^2}\,\mathrm{d}x &,\text{ if }1< z<\sqrt 2 \end{cases}
\end{align}
, as I had previously found.
|
$f_z(z)$ :
So, for $1\le z<\sqrt 2$, we have
$\cos^{-1}\left(\frac{1}{z}\right)\le\theta\le\sin^{-1}\left(\frac{1}{z}\right)$
You can simplify your expressions by using symmetry and evaluating them for $\theta_{min} < \theta < \frac{\pi}{4}$, i.e. working on half of the space and then doubling the result.
Then you get:
$$P(Z \leq r) = 2 \int_0^r z \left(\int_{\theta_{min}}^{\frac{\pi}{4}}d\theta\right) dz = \int_0^r z \left(\frac{\pi}{2}-2\theta_{min}\right) dz$$
and your $f_z(z)$ is
$$f_z(z) = z \left(\frac{\pi}{2}-2\theta_{min}\right) = \begin{cases} z\left(\frac{\pi}{2}\right) & \text{ if } 0 \leq z \leq 1 \\ z \left(\frac{\pi}{2} - 2 \cos^{-1}\left(\frac{1}{z}\right)\right) & \text{ if } 1 < z \leq \sqrt{2} \end{cases}$$
$F_z(z)$ :
You can use the indefinite integral:
$$\int z \cos^{-1}\left(\frac{1}{z}\right) dz = \frac{1}{2} z \left( z \cos^{-1}\left(\frac{1}{z}\right) - \sqrt{1-\frac{1}{z^2}} \right) + C $$
note $\frac{d}{du} \cos^{-1}(u) = - (1-u^2)^{-0.5}$
This leads straightforwardly to something similar to Xi'an's expression for $\Pr(Z \leq z)$, namely
if $1 \leq z \leq \sqrt{2}$ then:
$$F_z(z) = {z^2} \left(\frac{\pi}{4}-\cos^{-1}\left(\frac{1}{z}\right) + z^{-1}\sqrt{1-\frac{1}{z^2}} \right)$$
The relation with your expression is seen when we split up the $\cos^{-1}$ into two $\cos^{-1}$ expressions, and then convert them to different $\sin^{-1}$ expressions.
for $z>1$ we have
$$\cos^{-1}\left(\frac{1}{z}\right) = \sin^{-1}\left(\sqrt{1-\frac{1}{z^2}}\right) = \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) $$
and
$$\cos^{-1}\left(\frac{1}{z}\right) = \frac{\pi}{2} -\sin^{-1}\left(\frac{1}{z}\right) $$
so
$$\begin{array}\\
\cos^{-1}\left(\frac{1}{z}\right) & = 0.5 \cos^{-1}\left(\frac{1}{z}\right) + 0.5 \cos^{-1}\left(\frac{1}{z}\right) \\
& = \frac{\pi}{4} - 0.5 \sin^{-1}\left(\frac{1}{z}\right) + 0.5 \sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right) \end{array} $$
which results in your expression when you plug this into the aforementioned $F_z(z)$ for $1<z<\sqrt{2}$.
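A short R check of this $F_z(z)$ against the empirical distribution function (the evaluation points below are arbitrary; the two cases $z\le 1$ and $1<z\le\sqrt 2$ are handled separately):
set.seed(1)
z <- sqrt(runif(1e5)^2 + runif(1e5)^2)
Fz <- function(x) {
  out <- pi * x^2 / 4                                            # case 0 < x <= 1
  i <- x > 1
  out[i] <- x[i]^2 * (pi/4 - acos(1/x[i])) + sqrt(x[i]^2 - 1)    # case 1 < x <= sqrt(2)
  out
}
grid <- c(0.5, 0.9, 1.1, 1.3, 1.4)
rbind(empirical = ecdf(z)(grid), formula = Fz(grid))             # rows should agree closely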
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/323617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
State-space model with contemporaneous effects I have the following system of equations:
$$
\begin{align}
y_t^{(1)}&=y_t^{(2)}-x_t+\epsilon_t\\
y_t^{(2)}&=x_t+\nu_t\\
x_t&=\alpha x_{t-1}+u_t
\end{align}
$$
where $y_t^{(1)}, y_t^{(2)}$ are observed and $x_t$ is not.
I'm having some issues putting this into the state-space formulation. The issue I have is that in order to get $y_t^{(2)}$ in the (measurement) equation for $y_t^{(1)}$ I need to put it in the state vector. But I need $y_t^{(2)}$ in the measurement equation in order to get $x_t$ in there. So what do I do with $y_t^{(2)}$ in the state vector? Can I simply skip that row in the state equation? That would give me:
$$
\begin{align}
\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}-1 & 1 \\
0 & 1\end{pmatrix}\begin{pmatrix}x_t \\ y_t^{(2)}\end{pmatrix}+\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}\\
x_t&=\begin{pmatrix}\alpha & 0 \end{pmatrix}\begin{pmatrix}x_{t-1} \\ y_{t-1}^{(2)}\end{pmatrix}+u_t.
\end{align}
$$
| Substitute the second equation into the first and you have
\begin{align}
y_t^{(1)}&=y_t^{(2)}-x_t+\epsilon_t\\
&=x_t+\nu_t-x_t+\epsilon_t\\
&=\nu_t+\epsilon_t.
\end{align}
So it's
\begin{align}
\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}0 \\
1\end{pmatrix}x_t+\begin{pmatrix}1 & 1 \\ 0 & 1\end{pmatrix}\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}\\
x_t&=\alpha x_{t-1}+u_t.
\end{align}
Another way to see it is to rewrite the measurement equations as
\begin{align}
y_t^{(1)}-y_t^{(2)}&=-x_t+\epsilon_t\\
y_t^{(2)}&=x_t+\nu_t,
\end{align}
which is equivalent to
$$\begin{pmatrix}1 & -1 \\ 0 & 1\end{pmatrix}\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}=\begin{pmatrix}-1 \\
1\end{pmatrix}x_t+\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}.$$
To isolate the observables on the left hand side, note that
$$\begin{pmatrix}1&-1\\0&1\end{pmatrix}^{-1}=\begin{pmatrix}1&1\\0&1\end{pmatrix}.$$
Multiply both sides by that and you get
\begin{align}
\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}1 & -1 \\ 0 & 1\end{pmatrix}\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}1&1\\0&1\end{pmatrix}\left[\begin{pmatrix}-1 \\
1\end{pmatrix}x_t+\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}\right]\\
\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}-1 \\
1\end{pmatrix}x_t+\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}\\
\begin{pmatrix}y_t^{(1)}\\ y_t^{(2)}\end{pmatrix}&=\begin{pmatrix}0 \\
1\end{pmatrix}x_t+\begin{pmatrix}1&1\\0&1\end{pmatrix}\begin{pmatrix}\epsilon_t\\\nu_t\end{pmatrix}.
\end{align}
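A minimal R sketch confirming the rewritten system numerically (the value of $\alpha$ and the unit variances of $u_t,\epsilon_t,\nu_t$ are arbitrary choices, since the question leaves them unspecified):
set.seed(1)
n <- 200; alpha <- 0.8
u <- rnorm(n); eps <- rnorm(n); nu <- rnorm(n)
x  <- as.numeric(filter(u, alpha, method = "recursive"))   # x_t = alpha*x_{t-1} + u_t
y1 <- (x + nu) - x + eps                                   # original measurement equations
y2 <- x + nu
Z <- c(0, 1); H <- matrix(c(1, 0, 1, 1), nrow = 2)         # H = [1 1; 0 1]
chk <- sapply(1:n, function(t) c(y1[t], y2[t]) - (Z * x[t] + H %*% c(eps[t], nu[t])))
max(abs(chk))                                              # numerically zero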
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/373080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understanding KL divergence between two univariate Gaussian distributions I'm trying to understand KL divergence from this post on SE. I am following @ocram's answer, I understand the following :
$\int \left[\log( p(x)) - log( q(x)) \right] p(x) dx$
$=\int \left[ -\frac{1}{2} \log(2\pi) - \log(\sigma_1) - \frac{1}{2} \left(\frac{x-\mu_1}{\sigma_1}\right)^2 + \frac{1}{2}\log(2\pi) + \log(\sigma_2) + \frac{1}{2} \left(\frac{x-\mu_2}{\sigma_2}\right)^2 \right] \times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx$
$=\int \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right] \right\} \times \frac{1}{\sqrt{2\pi}\sigma_1} \exp\left[-\frac{1}{2}\left(\frac{x-\mu_1}{\sigma_1}\right)^2\right] dx$
But not the following:
$=E_{1} \left\{\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2} \left[ \left(\frac{x-\mu_2}{\sigma_2}\right)^2 - \left(\frac{x-\mu_1}{\sigma_1}\right)^2 \right]\right\}$
$=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2\sigma_1^2} E_1 \left\{(X-\mu_1)^2\right\}$
$=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2} E_1 \left\{(X-\mu_2)^2\right\} - \frac{1}{2}$
Now noting :
$(X - \mu_2)^2 = (X-\mu_1+\mu_1-\mu_2)^2 = (X-\mu_1)^2 + 2(X-\mu_1)(\mu_1-\mu_2) + (\mu_1-\mu_2)^2$
$=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{1}{2\sigma_2^2}
\left[E_1\left\{(X-\mu_1)^2\right\} + 2(\mu_1-\mu_2)E_1\left\{X-\mu_1\right\} + (\mu_1-\mu_2)^2\right] - \frac{1}{2}$
$=\log\left(\frac{\sigma_2}{\sigma_1}\right) + \frac{\sigma_1^2 + (\mu_1-\mu_2)^2}{2\sigma_2^2} - \frac{1}{2}$
First off what is $E_1$?
| $E_1$ is the expectation with respect to the first distribution $p(x)$. Denoting it with $E_p$ would be better, I think. – Monotros
I've created this answer from a comment so that this question is answered. Better to have a short answer than no answer at
all.
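To see $E_1 = E_p$ in action, a small R sketch (the parameter values are arbitrary) compares a Monte Carlo average over draws from $p$ with the closed-form expression above:
set.seed(1)
mu1 <- 0; s1 <- 1; mu2 <- 1; s2 <- 2
x  <- rnorm(1e6, mu1, s1)                      # draws from p, so the average is E_1 = E_p
mc <- mean(log(dnorm(x, mu1, s1) / dnorm(x, mu2, s2)))
cf <- log(s2/s1) + (s1^2 + (mu1 - mu2)^2) / (2*s2^2) - 1/2
c(mc, cf)                                      # agree up to Monte Carlo error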
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/406221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sample standard deviation is a biased estimator: Details in calculating the bias of $s$ In this post Why is sample standard deviation a biased estimator of $\sigma$?
the last step is shown as:
$$\sigma\left(1-\sqrt\frac{2}{n-1}\frac{\Gamma\frac{n}{2}}{\Gamma\frac{n-1}{2}}\right)
= \sigma\left(1-\sqrt\frac{2}{n-1}\frac{((n/2)-1)!}{((n-1)/2-1)!}\right)$$
How is this equal to $\frac{\sigma}{4n}$?
| Making the substitution $x = \frac{n}{2}-1$, you essentially want to control
$$1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}$$
as $x \to \infty$.
Gautschi's inequality (applied with $s=\frac{1}{2}$) implies
$$
1 - \sqrt{\frac{x+1}{x+\frac{1}{2}}}
<1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}
< 1 - \sqrt{\frac{x}{x+\frac{1}{2}}}$$
The upper and lower bounds can be rearranged as
$$
\left|1 - \frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}}\right|
< \frac{1}{2x+1} \cdot \frac{1}{1 + \sqrt{1 - \frac{1}{2x+1}}}
\approx \frac{1}{2(2x+1)}.$$
Plugging in $x=\frac{n}{2}-1$ gives a bound of $\frac{1}{2(n-1)}$. This is weaker than the author's claim of asymptotic equivalence with $\frac{1}{4n}$, but at least it is of the same order.
Responses to comments:
When $x=\frac{n}{2}-1$ you have $x+1 = \frac{n}{2}$ and $x + \frac{1}{2} = \frac{n}{2} - 1 + \frac{1}{2} = \frac{n}{2} - \frac{1}{2} = \frac{n-1}{2}$. So $\frac{\Gamma(x+1)}{\Gamma(x+\frac{1}{2}) \sqrt{x + \frac{1}{2}}} = \frac{\Gamma(n/2)}{\Gamma((n-1)/2) \sqrt{(n-1)/2}}$.
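For what it is worth, a few lines of R (using log-gamma for numerical stability) compare the exact bias factor with the claimed $\frac{1}{4n}$ rate and the bound $\frac{1}{2(n-1)}$ derived above:
n <- c(5, 10, 50, 100, 1000)
bias <- 1 - sqrt(2/(n - 1)) * exp(lgamma(n/2) - lgamma((n - 1)/2))
round(cbind(n, bias, quarter_n = 1/(4*n), bound = 1/(2*(n - 1))), 5)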
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/494489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Calculate the consistency of an Estimator I need to determine whether the following estimator $T$ is asymptotically unbiased and consistent for an i.i.d. sample of Gaussian distributions with $X_{i} \sim N(\mu, \sigma)$:
\begin{equation*}
T = \frac{1}{2} X_{1} + \frac{1}{2n} \sum\limits_{i = 2}^{n} X_{i}
\end{equation*}
Keep in mind that $n$ denotes the sample size.
I was able to figure out that the estimator $T$ is asymptotically unbiased.
First, I determined the expected value of the estimator.
\begin{align*}
E[T] &= \frac{1}{2} \cdot E[X_{1}] + \frac{1}{2 \cdot n} \sum\limits_{i = 2}^{n} E[X_{i}] \\
&= \frac{E[X]}{2} + \frac{1}{2 \cdot n} \cdot (n - 1) \cdot E[X] \\
&= \frac{E[X]}{2} + \frac{(n - 1)}{2 \cdot n} \cdot E[X] \\
&= \frac{\mu}{2} + \frac{(n - 1) \cdot \mu}{2 \cdot n} \\
&= \frac{\mu}{2} + \frac{n \cdot \mu - \mu}{2 \cdot n} \\
\end{align*}
Since the expected value does not equal $\mu$, one can conclude that the estimator $T$ is biased.
However, if we calculate the estimator's bias $b(T)$ and check if it converges to 0, we can see that the estimator is asymptotically unbiased (the calculation of the limit was done using WolframAlpha):
\begin{equation*}
b(T) = E[T] - \mu = \left(\frac{\mu}{2} + \frac{n \cdot \mu - \mu}{2 \cdot n} \right) - \mu
\end{equation*}
\begin{equation*}
\text{lim}_{n \rightarrow +\infty}\left(\frac{\mu}{2} + \frac{n \cdot \mu - \mu}{2 \cdot n} - \mu \right) = 0
\end{equation*}
Unfortunately, I have not been able to find out whether the estimator $T$ is consistent.
From my understanding we can find out if a biased estimator is consistent by verifying if the mean squared error $MSE$ of the error approaches 0 when the sample size $n$ gets infinitely large.
In order to calculate the $MSE$, we need to calculate the variance $VAR$ of the estimator and then subtract the square of the bias $b$ from the variance $VAR$:
\begin{equation*}
\text{MSE}(T) = \text{VAR}(T) - b^{2}(T)
\end{equation*}
\begin{equation*}
\text{lim}_{n \rightarrow +\infty}\left(\text{MSE}(T)\right) = 0 \Rightarrow T \text{ is consistent}
\end{equation*}
The issue is that I am not able to correctly calculate the MSE.
I tried many approaches, but I could not figure out what's wrong.
My current approach is the following:
\begin{align*}
\text{VAR}(T) &= \frac{1}{2^{2}} \cdot \text{VAR}(X_{1}) + \frac{1}{(2 \cdot n)^{2}} \sum\limits_{i = 2}^{n} \text{VAR}(X_{i}) \\
&= \frac{1}{2^{2}} \cdot \text{VAR}(X) + \frac{1}{(2 \cdot n)^{2}} \cdot (n - 1) \cdot \text{VAR}(X) \\
&= \frac{\sigma^{2}}{4} + \frac{(n - 1) \cdot \sigma^{2}}{4 \cdot n^{2}} \\
&= \frac{\sigma^{2}}{4} + \frac{n \sigma^{2} - \sigma^{2}}{4 \cdot n^{2}} \\
&= \frac{n^{2} \cdot \sigma^{2}}{4 \cdot n^{2}} + \frac{n \sigma^{2} - \sigma^{2}}{4 \cdot n^{2}} \\
&= \frac{n^{2} \cdot \sigma^{2} + n \sigma^{2} - \sigma^{2}}{4 \cdot n^{2}} \\
\end{align*}
The particular issue lies in finding the value for the square of the bias $b^{2}(T)$.
I tried many different approaches, but I could not find an equation which makes my calculation work.
Therefore, my issue is how can I find a sensible equation for $b^{2}(T)$?
Just for reference, here is my current approach:
See WolframAlpha for the expansion of the bias
\begin{align*}
b^{2}(T) &= \left( \frac{\sigma}{2} + \frac{n \cdot \sigma - \sigma}{2 \cdot n} - \sigma \right)^{2} \\
&= \frac{\sigma^{2}}{4} \\
\end{align*}
\begin{align*}
\text{MSE}(T) &= \frac{n^{2} \cdot \sigma^{2} + n \sigma^{2} - \sigma^{2}}{4 \cdot n^{2}} - \frac{\sigma^{2}}{4}\\
&= \frac{n^{2} \cdot \sigma^{2} + n \sigma^{2} - 2 \cdot \sigma^{2}}{4 \cdot n^{2}}
\end{align*}
\begin{equation*}
\text{lim}_{n \rightarrow +\infty} \left( \frac{n^{2} \cdot \sigma^{2} + n \sigma^{2} - 2 \cdot \sigma^{2}}{4 \cdot n^{2}} \right) = \frac{\sigma^{2}}{4}
\end{equation*}
Thank you for your help! Grazie mille!
| By definition, a consistent estimator converges in probability to a constant as the sample grows larger.
To be explicit, let's subscript $T$ with the sample size. Note that
$$\operatorname{Var}(T_n) = \operatorname{Var}\left(\frac{X_1}{2}\right) + \operatorname{Var}\left(\frac{1}{2n}\sum_{i=2}^n X_i\right) \ge \operatorname{Var}\left(\frac{X_1}{2}\right) = \frac{\sigma^2}{4}.$$
Because $T_n,$ being a linear combination of independent Normal variables, has a Normal distribution, it cannot possibly converge to a constant and therefore is not consistent.
One quick rigorous proof is to suppose it does converge in probability to a number $\theta$ and then observe that, because the standard deviation of $T_n$ is at least $\sigma/2,$ $\Pr(|T_n-\theta|\ge \sigma/2) \ge 1-(\Phi(1)-\Phi(-1)) \gt 0$ (where $\Phi$ is the standard Normal distribution function), demonstrating that it does in fact not converge.
(If you're unfamiliar with this inequality, use Calculus to minimize the function $\theta\to \Pr(|Z-\theta|\ge 1)$ (for a standard normal variable $Z$) by finding the zeros of its derivative. You will discover the finite critical points occur where the densities at $\theta\pm 1$ are equal, immediately giving $\theta=0.$)
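A quick R simulation illustrates the point: the sampling variance of $T_n$ stays near $\sigma^2/4 = 1/4$ instead of shrinking to zero (here $\mu=0$ and $\sigma=1$, chosen arbitrarily):
set.seed(7)
Tn <- function(n, mu = 0) { x <- rnorm(n, mu, 1); x[1]/2 + sum(x[-1])/(2*n) }
sapply(c(10, 100, 1000), function(n) var(replicate(1e4, Tn(n))))   # all close to 0.25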
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/495867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Fourth moment of arch(1) process I have an ARCH(1) process
\begin{align*}
Y_t &= \sigma_t \epsilon_t, \\
\sigma_t^2 &= \omega + \alpha Y_{t-1}^2,
\end{align*}
and I am trying to express the fourth moment $\mathbb{E}[Y_t^4]$ in terms of $\omega$, $\alpha$ and $\mathbb{E}[\epsilon_t^4]$.
| For \begin{align*}
Y_t = \sigma_t \epsilon_t, \qquad \sigma^2_t = \omega + \alpha Y^2_{t-1}, \qquad \omega>0, \alpha \geq 0,
\end{align*}
we assume $\sigma_t$ and $\epsilon_t$ to be independent. I also assume standard normality for $\epsilon_t$, so that $E(\epsilon_t^4)=3$. (You will see from the proof what needs to happen for convergence when the fourth moment is different.)
Consider a recursion for the 4th moment.
\begin{align*}
E[Y^4_t] &= E[\sigma^4_t \epsilon^4_t] = E[\sigma^4_t] E[\epsilon^4_t] \\
&= 3 E[\sigma^4_t] = 3 E[(\omega + \alpha Y^2_{t-1})^2] \\
&= 3 E[\omega^2 + 2\omega \alpha Y^2_{t-1} + \alpha^2 Y^4_{t-1}] \\
&= 3 \omega^2 + 6 \omega \alpha E[Y^2_{t-1}] + 3 \alpha^2 E[Y^4_{t-1}] \\
&= \underbrace{ 3 \omega^2 + \frac{6 \omega^2 \alpha}{1 - \alpha}}_{=:c} + 3 \alpha^2 E[Y^4_{t-1}], \\
\end{align*}
where the last line uses results for the variance of an ARCH(1)-process.
Repeated substitution yields
\begin{align*}
E[Y^4_t] &= c + 3 \alpha^2 E[Y^4_{t-1}] \\
&= c + 3 \alpha^2 (c + 3 \alpha^2 E[Y^4_{t-2}]) \\
&= c + 3 \alpha^2 c + (3 \alpha^2)^2 E[Y^4_{t-2}] \\
&= c + 3 \alpha^2c + (3 \alpha^2)^2 (c + 3 \alpha^2 E[Y^4_{t-3}]) \\
&= c + 3 \alpha^2 c + (3 \alpha^2)^2 c + (3 \alpha^2)^3 E[Y^4_{t-3}]\\
& \qquad \qquad \qquad \qquad \vdots \\
&= c \sum^n_{i=0} (3 \alpha^2)^i + (3 \alpha^2)^{n+1} E[Y^4_{t-(n+1)}] \\
\end{align*}
For $E[Y^4_t]$ to be finite we hence need $3 \alpha^2 < 1$. In this case, we obtain
\begin{align*}
E[Y^4_t] &= c \sum^\infty_{i=0} (3 \alpha^2)^i \quad\overset{x:=3 \alpha^2}{=} c \sum^\infty_{i=0} x^i= \frac{c}{1 - x} \\
&= \frac{c}{1 - 3 \alpha^2} \\
& = \frac{3 \omega^2 (1 + \alpha)}{(1 - \alpha) (1 - 3 \alpha^2)}. \\
\end{align*}
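A rough R simulation check (the parameter values are arbitrary, chosen so that $3\alpha^2<1$; the fourth moment is heavy-tailed, so the Monte Carlo estimate converges slowly):
set.seed(42)
omega <- 0.2; alpha <- 0.3                      # 3*alpha^2 = 0.27 < 1
n <- 2e6
y <- numeric(n)
y[1] <- rnorm(1) * sqrt(omega / (1 - alpha))
for (t in 2:n) y[t] <- rnorm(1) * sqrt(omega + alpha * y[t - 1]^2)
c(simulated   = mean(y^4),
  theoretical = 3 * omega^2 * (1 + alpha) / ((1 - alpha) * (1 - 3 * alpha^2)))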
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/550022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Bounding the difference between square roots I want to compute the value of $\frac{1}{\sqrt{a + b + c}}$. Say I can observe a and b, but not c. Instead, I can observe d which is a good approximation for c in the sense that $P( |c-d| \leq 0.001 )$ is large (say 95%), and both c and d are known to have $|c| \leq 1, |d| \leq 1$ so a difference of 0.001 is actually small.
I want to argue that $\frac{1}{\sqrt{a + b + d}}$ is a good approximation because d is a good approximation for c. Is there anything I can say about the difference
$\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}}$?
Maybe I could say something like $P(\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}} \leq 0.001) = x?$ I'm worried that being under a square root might mess things up.
Would I need to find out the exact distribution of $c - d$, or anything else before I can make such claims?
| $\frac{1}{\sqrt{a + b + c}} - \frac{1}{\sqrt{a + b + d}} = \frac{\sqrt{a + b + d}-\sqrt{a + b + c}}{\sqrt{a + b + c}\sqrt{a + b + d}} $
$ =\frac{(\sqrt{a + b + d}+\sqrt{a + b + c})(\sqrt{a + b + d}-\sqrt{a + b + c})}{(\sqrt{a + b + d}+\sqrt{a + b + c})\sqrt{a + b + c}\sqrt{a + b + d}}$
$ =\frac{(d-c)}{(\sqrt{a + b + d}+\sqrt{a + b + c})\sqrt{a + b + c}\sqrt{a + b + d}}$
The denominator is positive, so the difference has the same sign as $d-c$; moreover, whenever the denominator is at least $1$, the absolute difference is bounded by $|d-c|$.
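A tiny numeric illustration of the identity and the resulting bound (the values below are arbitrary; c0 and d0 stand in for $c$ and $d$ to avoid masking R's c() function):
a <- 2; b <- 3; c0 <- 0.400; d0 <- 0.4007
lhs <- 1/sqrt(a + b + c0) - 1/sqrt(a + b + d0)
den <- (sqrt(a + b + d0) + sqrt(a + b + c0)) * sqrt(a + b + c0) * sqrt(a + b + d0)
c(lhs, (d0 - c0)/den, abs(lhs) <= abs(d0 - c0))   # identity holds; difference bounded by |d - c|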
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/20409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Decision Tree Probability - With Back Step For the below decision tree, I can see how the probabilities of each end state are calculated... simply multiply the previous decisions:
But for this one below, I'm totally stumped. It seems in my head the chance at resetting back to the first decision is completely negated because essentially the whole decision restarts like the first decision was never made. But based on the end probabilities this gives the s3 a larger chance at being chosen?
What does the math behind this look like? How are those final probabilities calculated given the reset in the decision tree?
| The probability of arriving at S1 is
$$\frac 12\cdot \frac 29
+ \frac 12\cdot \left(\frac 49\cdot \frac 12\right)\cdot\frac 29
+\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^2\cdot\frac 29
+\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^3\cdot\frac 29
+ \cdots\\
= \frac 19\cdot \left[1 + \left(\frac 29\right)
+ \left(\frac 29\right)^2 + \left(\frac 29\right)^3 + \cdots \right]\\
= \frac 19 \cdot \frac{1}{1-\frac 29} = \frac 17 = \frac{2}{14}.$$
A similar calculation (replacing the trailing $\displaystyle\frac 29$'s
by
$\displaystyle\frac 39$'s) gives the probability of arriving at S2 as
$\displaystyle \frac{3}{14}$.
At this point, we can jump to the conclusion that the probability of
arriving at S3 must be $\displaystyle\frac{9}{14}$ without sullying our hands with
more summations of geometric series, but more skeptical folks can work with
$$\frac 12
+ \frac 12\cdot \left(\frac 49\cdot \frac 12\right)
+\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^2
+\frac 12\cdot \left(\frac 49\cdot \frac 12\right)^3
+ \cdots\\$$
which looks a lot like the sum on the second line of this
answer except for those trailing $\displaystyle \frac 29$'s, and so
we get that the probability of arriving at S3 is $\displaystyle\frac 92$
times the probability of arriving at S1, which gives us
$\displaystyle \frac 92\cdot \frac{2}{14} = \frac{9}{14}$.
Look, Ma! No more summations of geometric series!
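For readers who prefer simulation to series, a short R sketch of the chain implied by the sums above (assumed structure, since the figure is not shown: the first decision sends you to S3 with probability 1/2 and to the second decision otherwise; the second decision sends you to S1, S2, or back to the start with probabilities 2/9, 3/9 and 4/9):
set.seed(1)
sim_one <- function() {
  repeat {
    if (runif(1) < 1/2) return("S3")
    u <- runif(1)
    if (u < 2/9) return("S1")
    if (u < 5/9) return("S2")
    # otherwise fall through: reset to the first decision
  }
}
table(replicate(1e5, sim_one())) / 1e5   # compare with 2/14, 3/14, 9/14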
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/242996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cdf of the joint density $f(x, y) = \frac{3}{2 \pi} \sqrt{1-x^2-y^2}$ $$f(x, y) = \frac{3}{2\pi} \sqrt{1-x^2-y^2}, \quad x^2 + y^2 \leq 1$$
Find the cdf $F(x, y)$. To do this, we need to compute the integral
$$ \int_{-1}^{x} \int_{-1}^{y} \frac{3}{2\pi} \sqrt{1-u^2-v^2} dv du .$$
This is where I'm stuck. Converting to polar coordinates wouldn't seem to help since the area to be integrated isn't a circle.
Edit: Adjusting the limits of integration to
$$\int_{-1}^{x} \int_{-\sqrt{1-x^2}}^{y} \frac{3}{2\pi} \sqrt{1-u^2-v^2} dv du,$$
I tried using the substitution
$$v=\sqrt{1−u^2}\sin{\theta}, \quad dv=\sqrt{1−u^2}cos{\theta}d\theta$$
Then $\theta = \arcsin(\frac{v}{\sqrt{1-u^2}})$, giving
$$\int_{-1}^{x} \int_{\arcsin(-\frac{\sqrt{1-x^2}}{\sqrt{1-u^2}})}^{\arcsin(\frac{y}{\sqrt{1-u^2}})} \frac{3}{2\pi} \sqrt{1-u^2-(\sqrt{1−u^2}\sin{\theta})^2} \sqrt{1−u^2}cos{\theta}d\theta du$$
$$=\int_{-1}^{x} \frac{3}{2\pi} (1−u^2) \int_{\arcsin(-\frac{\sqrt{1-x^2}}{\sqrt{1-u^2}})}^{\arcsin(\frac{y}{\sqrt{1-u^2}})} cos^2{\theta}d\theta du$$
$$=\int_{-1}^{x} \frac{3}{2\pi} (1−u^2) \int_{\arcsin(-\frac{\sqrt{1-x^2}}{\sqrt{1-u^2}})}^{\arcsin(\frac{y}{\sqrt{1-u^2}})} \frac{1}{2}[1+cos(2\theta)]d\theta du$$
$$=\int_{-1}^{x} \frac{3}{4\pi} (1−u^2) [\theta+\frac{1}{2}\sin(2\theta)]_{\arcsin(-\frac{\sqrt{1-x^2}}{\sqrt{1-u^2}})}^{\arcsin(\frac{y}{\sqrt{1-u^2}})} du$$
At this point the integral got really messy to deal with. Is there another way to approach this?
| This problem will be easier to solve if you first try and visualize what the joint pdf looks as a surface above the $x$-$y$ plane in three-dimensional space. Hint: ignoring the scale factor $\frac{3}{2\pi}$, what is the surface defined by
$$z = \begin{cases}\sqrt{1 - x^2 - y^2}, & x^2+y^2 \leq 1,\\
0, &\text{otherwise}, \end{cases}~~~~?$$ Can you think of ways that you might be able to compute the volume between this surface and the $x$-$y$ plane in the region $\{(x,y) \colon x\leq a, y \leq b\}$. The visualization will also help you set the lower and upper limits on the integrals as suggested by JarleTufto.
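A quick numeric sanity check of the hint (not the full solution): the surface is the upper unit hemisphere, whose volume is $2\pi/3$, so the constant $\frac{3}{2\pi}$ makes the density integrate to $1$. In R:
f <- function(x, y) ifelse(x^2 + y^2 <= 1, 3/(2*pi) * sqrt(pmax(0, 1 - x^2 - y^2)), 0)
g <- Vectorize(function(x) integrate(function(y) f(x, y), -1, 1)$value)
integrate(g, -1, 1)$value   # numerically 1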
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/249345",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables.
The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$
Transforming to polar coordinates $(X,Y)\to(Z,\Theta)$ such that $$X=Z\cos\Theta\qquad\text{ and }\qquad Y=Z\sin\Theta$$
So, $z=\sqrt{x^2+y^2}$ and $0< x,y<1\implies0< z<\sqrt 2$.
When $0< z<1$, we have $0< \cos\theta<1,\,0<\sin\theta<1$ so that $0<\theta<\frac{\pi}{2}$.
When $1< z<\sqrt 2$, we have $z\cos\theta<1\implies\theta>\cos^{-1}\left(\frac{1}{z}\right)$, as $\cos\theta$ is decreasing on $\theta\in\left[0,\frac{\pi}{2}\right]$; and $z\sin\theta<1\implies\theta<\sin^{-1}\left(\frac{1}{z}\right)$, as $\sin\theta$ is increasing on $\theta\in\left[0,\frac{\pi}{2}\right]$.
So, for $1< z<\sqrt 2$, we have $\cos^{-1}\left(\frac{1}{z}\right)<\theta<\sin^{-1}\left(\frac{1}{z}\right)$.
The absolute value of jacobian of transformation is $$|J|=z$$
Thus the joint density of $(Z,\Theta)$ is given by
$$f_{Z,\Theta}(z,\theta)=z\mathbf 1_{\{z\in(0,1),\,\theta\in\left(0,\pi/2\right)\}\bigcup\{z\in(1,\sqrt2),\,\theta\in\left(\cos^{-1}\left(1/z\right),\sin^{-1}\left(1/z\right)\right)\}}$$
Integrating out $\theta$, we obtain the pdf of $Z$ as
$$f_Z(z)=\frac{\pi z}{2}\mathbf 1_{0<z<1}+\left(\frac{\pi z}{2}-2z\cos^{-1}\left(\frac{1}{z}\right)\right)\mathbf 1_{1<z<\sqrt 2}$$
Is my reasoning above correct? In any case, I would like to avoid this method and instead try to find the cdf of $Z$ directly. But I couldn't find the desired areas while evaluating $\mathrm{Pr}(Y\le \sqrt{z^2-X^2})$ geometrically.
EDIT.
I tried finding the distribution function of $Z$ as
\begin{align}
F_Z(z)&=\Pr(Z\le z)
\\&=\Pr(X^2+Y^2\le z^2)
\\&=\iint_{x^2+y^2\le z^2}\mathbf1_{0<x,y<1}\,\mathrm{d}x\,\mathrm{d}y
\end{align}
Mathematica says this should reduce to
$$F_Z(z)=\begin{cases}0 &,\text{ if }z<0\\ \frac{\pi z^2}{4} &,\text{ if } 0< z<1\\ \sqrt{z^2-1}+\frac{z^2}{2}\left(\sin^{-1}\left(\frac{1}{z}\right)-\sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right)\right) &,\text{ if }1< z<\sqrt 2\\ 1 &,\text{ if }z>\sqrt 2 \end{cases}$$
which looks like the correct expression. Differentiating $F_Z$ for the case $1< z<\sqrt 2$ though brings up an expression which doesn't readily simplify to the pdf I already obtained.
Finally, I think I have the correct pictures for the CDF:
For $0<z<1$ :
And for $1<z<\sqrt 2$ :
Shaded portions are supposed to indicate the area of the region $$\left\{(x,y):0<x,y< 1\,,\,x^2+y^2\le z^2\right\}$$
The picture immediately yields
\begin{align}
F_Z(z)&=\Pr\left(-\sqrt{z^2-X^2}\le Y\le\sqrt{z^2-X^2}\right)
\\&=\begin{cases}\frac{\pi z^2}{4} &,\text{ if } 0<z<1\\\\ \sqrt{z^2-1}+\int_{\sqrt{z^2-1}}^1 \sqrt{z^2-x^2}\,\mathrm{d}x &,\text{ if }1< z<\sqrt 2 \end{cases}
\end{align}
, as I had previously found.
| That the pdf is correct can be checked by a simple simulation
samps=sqrt(runif(1e5)^2+runif(1e5)^2)  # simulated draws of Z
hist(samps,prob=TRUE,nclass=143,col="wheat")
# derived density; the term x+(1-x)*(x<1) just keeps the acos() argument in [0,1] when x<1
df=function(x){pi*x/2-2*x*(x>1)*acos(1/(x+(1-x)*(x<1)))}
curve(df,add=TRUE,col="sienna",lwd=3)  # overlay the density on the histogram
Finding the cdf without the polar change of variables goes through
\begin{align*}
\mathrm{Pr}(\sqrt{X^2+Y^2}\le z) &= \mathrm{Pr}(X^2+Y^2\le z^2)\\
&= \mathrm{Pr}(Y^2\le z^2-X^2)\\
&=\mathrm{Pr}(Y\le \sqrt{z^2-X^2}\,,X\le z)\\
&=\mathbb{E}^X[\sqrt{z^2-X^2}\mathbb{I}_{[0,\min(1,z)]}(X)]\\
&=\int_0^{\min(1,z)} \sqrt{z^2-x^2}\,\text{d}x\\
&=z^2\int_0^{\min(1,z^{-1})} \sqrt{1-y^2}\,\text{d}y\qquad [x=yz\,,\ \text{d}x=z\text{d}y]\\
&=z^2\int_0^{\min(\pi/2,\cos^{-1} z^{-1})} \sin^2{\theta} \,\text{d}\theta\qquad [y=\cos(\theta)\,,\ \text{d}y=\sin(\theta)\text{d}\theta]\\
&=\frac{z^2}{2}\left[ \min(\pi/2,\cos^{-1} z^{-1}) - \sin\{\min(\pi/2,\cos^{-1} z^{-1})\}\cos\{\min(\pi/2,\cos^{-1} z^{-1}\}\right]\\
&=\frac{z^2}{2}\begin{cases}
\pi/2 &\text{ if }z<1\\
\cos^{-1} z^{-1}-\sin\{\cos^{-1} z^{-1})\}z^{-1}&\text{ if }z\ge 1\\
\end{cases}\\
&=\frac{z^2}{2}\begin{cases}
\pi/2 &\text{ if }z<1\\
\cos^{-1} z^{-1}-\sqrt{1-z^{-2}}z^{-1}&\text{ if }z\ge 1\\
\end{cases}
\end{align*}
which ends up with the same complexity! (Plus potential mistakes of mine along the way!)
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/323617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
} |
Probability that the absolute value of a normal distribution is greater than another Greatly appreciate anyone that is willing to Help. I am thinking about the question of comparing the absolute value of normal distributions. Given $a > b$, $X$ ~ $N(0,a)$ and $Y$ ~ $N(0,b)$, what is the distribution of $|X| - |Y|$?
| It may be shown that $|X| \sim HN(a)$ and $|Y| \sim HN(b)$, where $HN(\cdot)$ represents a half-normal distribution. For completeness, the probability density functions of $|X|$ and $|Y|$ are
\begin{eqnarray*}
f_{|X|} (x) &=& \frac{\sqrt{2}}{a\sqrt{\pi}} \exp \left(-\frac{x^2}{2a^2}\right), \quad x>0 \\
f_{|Y|} (y) &=& \frac{\sqrt{2}}{b\sqrt{\pi}} \exp \left(-\frac{y^2}{2b^2}\right), \quad y>0.
\end{eqnarray*}
It is also useful to note that if $W \sim HN(\sigma)$, then the moment generating function of $W$ is
\begin{eqnarray*}
\mbox{E} \left[\exp \left(tW\right) \right] = 2 \exp \left(\frac{\sigma^2 t^2}{2}\right) \Phi \left(\sigma t\right),
\end{eqnarray*}
where $\Phi (\cdot)$ denotes the CDF of the standard normal distribution.
We wish to find the distribution of $Z = |X| - |Y|$. By definition, the CDF of $Z$ of is defined as
\begin{eqnarray*}
F_{Z} (z) = \int_{0}^{\infty}\int_{0}^{\infty} f_{|X|} (x) f_{|Y|} (y) \mbox{I} \left[x-y \le z\right] \mbox{d}x \mbox{d}y,
\end{eqnarray*}
where $\mbox{I} \left[\cdot\right]$ denotes the indicator function. Note that the double integral takes place in the first quadrant and the indicator function specifies all points above the line $y = x-z$. Now if $z \ge 0$, this line will intersect the $x$-axis, otherwise it will intersect the $y$-axis. Now ordering the integration appropriately will greatly simplify the double integral. See the following plots where the dark black line denotes $y=x-z$ and the red lines denote the direction and bounds of integration (albeit not extended indefinitely).
Therefore, we can write the CDF of $Z$ as
\begin{eqnarray*}
F_{Z} (z) = \mbox{I} \left[z \ge 0 \right]\int_{0}^{\infty}\int_{0}^{y+z} f_{|X|} (x) f_{|Y|} (y) \mbox{d}x \mbox{d}y + \mbox{I} \left[z \lt 0 \right]\int_{0}^{\infty}\int_{x-z}^{\infty} f_{|X|} (x) f_{|Y|} (y) \mbox{d}y \mbox{d}x.
\end{eqnarray*}
In the first double integral, consider the change of variables from $(x,y)$ to $(v,y)$, where $v=x-y$. The Jacobian of this transformation is $1$. The transformation is useful since it removes the occurrence of $y$ in the bound of integration when differentiating with respect to $v$. A similar transformation was defined for the second double integral. Plugging these values in, we obtain
\begin{eqnarray*}
F_{Z} (z) = \mbox{I} \left[z \ge 0 \right]\int_{0}^{\infty} f_{|Y|} (y) \int_{-\infty}^{z} f_{|X|} (v+y) \mbox{d}v \mbox{d}y + \mbox{I} \left[z \lt 0 \right]\int_{0}^{\infty} f_{|X|} (x) \int_{-\infty}^{z} f_{|Y|} (x-v) \mbox{d}v \mbox{d}x.
\end{eqnarray*}
By Leibniz's integral rule (or the fundamental theorem of calculus), the PDF of $Z$ is
\begin{eqnarray*}
f_{Z} (z) &=& \mbox{I} \left[z \ge 0 \right]\int_{0}^{\infty} f_{|Y|} (y) f_{|X|} (z+y) \mbox{d}y + \mbox{I} \left[z \lt 0 \right]\int_{0}^{\infty} f_{|X|} (x) f_{|Y|} (x-z) \mbox{d}x \\
&=& \mbox{I} \left[z \ge 0 \right]\int_{0}^{\infty} f_{|Y|} (y) f_{|X|} (y+|z|) \mbox{d}y + \mbox{I} \left[z \lt 0 \right]\int_{0}^{\infty} f_{|X|} (x) f_{|Y|} (x+|z|) \mbox{d}x.
\end{eqnarray*}
These integrals may be solved quite simply by making use of the moment generating function result. I shall only solve the first one.
\begin{eqnarray*}
\int_{0}^{\infty} f_{|Y|} (y) f_{|X|} (y+|z|) \mbox{d}y &=& \frac{2}{ab \pi}\exp \left(-\frac{z^2}{2a^2} \right) \int_{0}^{\infty} \exp \left(-\frac{|z|}{a^2} y \right) \exp \left(-\frac{y^2}{2} \left[\frac{1}{a^2}+\frac{1}{b^2}\right] \right) \mbox{d}y.
\end{eqnarray*}
Now the second term within the integral is proportional to a $HN(\sigma)$ PDF with $\sigma^2 = \frac{a^2b^2}{a^2+b^2}$ and the first term is of the form of the MGF with $t = - \frac{|z|}{a^2}$. Hence, multiplying and dividing by the proportionality constant, $\frac{\sqrt{2}\sqrt{a^2+b^2}}{ab\sqrt{\pi}}$, it may be shown that the above reduces to
\begin{eqnarray*}
2 \sqrt{\frac{2}{\pi}} (a^2+b^2)^{-1/2} \exp \left(-\frac{z^2}{2(a^2+b^2)} \right) \Phi \left(-\frac{b}{a} \frac{|z|}{\sqrt{a^2+b^2}}\right).
\end{eqnarray*}
Making use of the standard normal PDF $ \phi(\cdot)$, the above can be written as
\begin{eqnarray*}
\frac{4}{\sqrt{a^2+b^2}} \phi\left(\frac{z}{\sqrt{a^2+b^2}}\right)\Phi \left(-\frac{b}{a} \frac{|z|}{\sqrt{a^2+b^2}}\right).
\end{eqnarray*}
Solving for the other portion of the PDF of $Z$, one will result in the equation
\begin{eqnarray*}
f_Z (z) = \begin{cases}
\frac{4}{\sqrt{a^2+b^2}} \phi\left(\frac{z}{\sqrt{a^2+b^2}}\right)\Phi \left(-\frac{b}{a} \frac{|z|}{\sqrt{a^2+b^2}}\right), & \mbox{for } z \ge 0 \\
\frac{4}{\sqrt{a^2+b^2}} \phi\left(\frac{z}{\sqrt{a^2+b^2}}\right)\Phi \left(-\frac{a}{b} \frac{|z|}{\sqrt{a^2+b^2}}\right), & \mbox{for } z \lt 0
\end{cases}.
\end{eqnarray*}
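A hedged Monte Carlo check of this density in R (here $a$ and $b$ are the standard deviations, matching the half-normal parametrisation above; the values $a=2$, $b=1$ are arbitrary):
set.seed(1)
a <- 2; b <- 1; s <- sqrt(a^2 + b^2)
z <- abs(rnorm(1e6, 0, a)) - abs(rnorm(1e6, 0, b))
fz <- function(z) 4/s * dnorm(z/s) * pnorm(-ifelse(z >= 0, b/a, a/b) * abs(z)/s)
hist(z, breaks = 200, prob = TRUE, col = "grey90", main = "")
curve(fz, from = -4, to = 8, add = TRUE, col = "red", lwd = 2)   # overlays the histogram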
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/560633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to derive the Jensen-Shannon divergence from the f-divergence? The Jensen-Shannon divergence is defined as $$JS(p, q) = \frac{1}{2}\left(KL\left(p||\frac{p+q}{2}\right) + KL\left(q||\frac{p+q}{2}\right) \right).$$ In Wikipedia it says that it can be derived from the f-divergence $$D_f(p||q) = \int_{-\infty}^{\infty} q(x) \cdot f\left(\frac{p(x)}{q(x)}\right) dx$$ with $f(x) = -(1+x)\cdot \text{log}(\frac{1+x}{2}) + x \cdot \text{log} (x)$. However, when I try it I end up with $JS(p, q) = 2 \cdot D_f(p||q)$.
\begin{align*}
JS(p, q) &= \frac{1}{2}\left( KL \left( p|| \frac{p+q}{2} \right) + KL \left( q|| \frac{p+q}{2} \right) \right)\\
&= \frac{1}{2} \int_{-\infty}^{\infty} p(x) \cdot \text{log} \left( \frac{2 \cdot p(x)}{p(x)+q(x)} \right) + q(x) \cdot \text{log} \left( \frac{2 \cdot q(x)}{p(x)+q(x)} \right) dx\\
&= \frac{1}{2} \int_{-\infty}^{\infty} p(x) \cdot \left( \text{log} \left( 2 \right) + \text{log} \left( p(x) \right) - \text{log} \left( p(x)+q(x) \right) \right) + q(x) \cdot \left( \text{log} \left( 2 \right) + \text{log} \left( q(x) \right) \right) - \text{log} \left( p(x)+q(x) \right) dx\\
&= \frac{1}{2} \int_{-\infty}^{\infty} (p(x)+q(x)) \cdot \text{log}\left(2 \right) - (p(x)+q(x)) \cdot \text{log}\left(p(x)+ q(x) \right) + p(x) \cdot \text{log} \left(p(x)\right) + q(x) \cdot \text{log}\left(q(x) \right) dx \\
&= \frac{1}{2} \int_{-\infty}^{\infty} p(x) \cdot \text{log} \left(p(x)\right) + q(x) \cdot \text{log}\left(q(x) \right) - (p(x)+q(x)) \cdot \text{log}\left(\frac{p(x)+ q(x)}{2} \right) dx
\end{align*}
and
\begin{align*}
D_f(p||q) &= \int_{-\infty}^{\infty} q(x) \cdot f\left(\frac{p(x)}{q(x)}\right) dx \\
&= \int_{-\infty}^{\infty} q(x) \cdot \left(-(1+\frac{p(x)}{q(x)}) \cdot \text{log}\left(\frac{1+\frac{p(x)}{q(x)}}{2} \right) + \frac{p(x)}{q(x)} \cdot \text{log}\left(\frac{p(x)}{q(x)}\right) \right) dx \\
&= \int_{-\infty}^{\infty} -(q(x)+p(x)) \cdot \text{log}\left(\frac{1+\frac{p(x)}{q(x)}}{2} \right) + p(x) \cdot \text{log}\left(\frac{p(x)}{q(x)}\right) dx \\
&= \int_{-\infty}^{\infty} p(x) \cdot \text{log}\left(p(x) \right) - p(x) \cdot \text{log}\left(q(x)\right) - p(x) \cdot \text{log}\left(1+\frac{p(x)}{q(x)} \right) + p(x) \cdot \text{log}\left(2 \right) - q(x) \cdot \text{log}\left(1+\frac{p(x)}{q(x)} \right) + q(x) \cdot \text{log}\left(2 \right) dx \\
&= \int_{-\infty}^{\infty} p(x) \cdot \text{log}\left(p(x) \right) - p(x) \cdot \text{log}\left(q(x)\right) - p(x) \cdot \text{log}\left(\frac{p(x)+q(x)}{q(x)} \right) + p(x) \cdot \text{log}\left(2 \right) - q(x) \cdot \text{log}\left(\frac{p(x)+q(x)}{q(x)} \right) + q(x) \cdot \text{log}\left(2 \right) dx \\
&= \int_{-\infty}^{\infty} p(x) \cdot \text{log}\left(p(x) \right) - p(x) \cdot \text{log}\left(q(x)\right) - p(x) \cdot \text{log}\left(p(x)+q(x) \right) + p(x) \cdot \text{log}\left(q(x) \right) + p(x) \cdot \text{log}\left(2 \right) - q(x) \cdot \text{log}\left(p(x)+q(x) \right) + q(x)\cdot \text{log}\left(q(x) \right) + q(x) \cdot \text{log}\left(2 \right) dx \\
&= \int_{-\infty}^{\infty} p(x) \cdot \text{log}\left(p(x) \right) - p(x) \cdot \text{log}\left(p(x)+q(x) \right) + p(x) \cdot \text{log}\left(2 \right) - q(x) \cdot \text{log}\left(p(x)+q(x) \right) + q(x)\cdot \text{log}\left(q(x) \right) + q(x) \cdot \text{log}\left(2 \right) dx \\
&= \int_{-\infty}^{\infty} p(x) \cdot \text{log} \left(p(x)\right) + q(x) \cdot \text{log}\left(q(x) \right) - (p(x)+q(x)) \cdot \text{log}\left(p(x)+ q(x)\right) + (p(x)+q(x)) \cdot \text{log}\left(2\right) dx \\
&= \frac{2}{2} \int_{-\infty}^{\infty} p(x) \cdot \text{log} \left(p(x)\right) + q(x) \cdot \text{log}\left(q(x) \right) - (p(x)+q(x)) \cdot \text{log}\left(\frac{p(x)+ q(x)}{2} \right) dx \\
&= 2 \cdot JS(p, q)
\end{align*}
Where is my mistake?
| Few observations:
$\rm [I] ~(p. 90)$ defines Jensen-Shannon divergence for $P, Q, ~P\ll Q$ as
$$\mathrm{JS}(P,~Q) := D\left(P\bigg \Vert \frac{P+Q}{2}\right)+D\left(Q\bigg \Vert \frac{P+Q}{2}\right)\tag 1\label 1$$
and the associated function to generate $\rm JS(\cdot,\cdot) $ from $D_f(\cdot\Vert\cdot) $ is
$$f(x) :=x\log\frac{2x}{x+1}+\log\frac{2}{x+1}. \tag 2$$
The definition of Jensen-Shannon in $\eqref{1}$ lacks the constant $1/2,$ however in the original paper $\rm [II]~ (sec. IV, ~ p. 147)$ it wasn't defined so.
In $\rm [III], $ the authors noted the corresponding function $g(t), ~t:=p_i(x) /q_j(x) $ as
$$ g(t) := \frac12\left(t\log\frac{2t}{t+1}+\log\frac{2}{t+1}\right).\tag 3\label 3$$
Also in $\rm [IV] ~(p. 4)$ the author mentioned the required function to be $\frac12\left(u\log u -(u+1)\log\left(\frac{u+1}{2}\right)\right) $ which is equivalent to $\eqref 3.$
In $\rm [II] ~(p. 147),$ the author noted that the $K$ divergence, defined in terms of the Kullback $I$ (in author's terminology)
$$K(p_1, p_2) := I\left(p_1,\frac12(p_1+p_2)\right),\tag 4$$
coincides with the $f$ divergence for $$x\mapsto x\log\frac{2x}{1+x}.\tag a\label a$$
The symmetrised version of $K$ is
$$L(p_1, p_2) := K(p_1, p_2)+K( p_2, p_1). \tag 5$$
As the author subsequently defined $\rm JS_{\pi}(\cdot,\cdot);$ for $\pi=\frac12, $
$$\mathrm{JS}_\frac{1}{2}(p_1,p_2)=\frac12 L(p_1,p_2).\tag 6\label 6$$
Now, using $\eqref{a}, $
\begin{align}\frac12 L(p, q) &=\frac12[K(p,q)+K(q,p)]\\ &=\frac12\left[\int q~\frac{p}{q}\log\frac{\frac{2p}{q}}{1+\frac{p}{q}}~\mathrm d\mu+ \int p~\frac{q}{p}\log\frac{\frac{2q}{p}}{1+\frac{q}{p}}~\mathrm d\mu\right]\\ &= \frac12\left[\int p\log\frac{2p}{q+p}~\mathrm d\mu+ \int q\log\frac{2q}{p+q}~\mathrm d\mu\right]\\&= \frac{1}{2}\left[\int q~\frac{p}{q}\log\frac{2\frac{p}{q}}{\frac{p}{q}+1}~\mathrm d\mu + \int q \log\frac{2}{1+\frac{p}{q}}~\mathrm d\mu\right]\tag 7\label 7\\&=\int q~f_{\rm{JS}}\left(\frac{p}{q}\right)~\mathrm d\mu,\end{align}
where, from $\eqref{7}, $
$$f_{\rm{JS}} (x) :=\frac12\left[x\log\frac{2x}{1+x}+ \log\frac{2}{1+x}\right] .\tag 8 $$
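A small numeric sketch in R (with arbitrary discrete $p$ and $q$) confirming that the generator in $(8)$ reproduces the Jensen-Shannon divergence with the $\tfrac12$ factor:
p <- c(0.2, 0.5, 0.3); q <- c(0.4, 0.4, 0.2); m <- (p + q)/2
kl  <- function(a, b) sum(a * log(a/b))
js  <- 0.5 * (kl(p, m) + kl(q, m))
fJS <- function(x) 0.5 * (x * log(2*x/(1 + x)) + log(2/(1 + x)))
Df  <- sum(q * fJS(p/q))
c(js, Df)   # identical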
References:
$\rm [I]$ Information Theory:
From Coding to Learning, Yury Polyanskiy, Yihong Wu, Cambridge University Press.
$\rm [II]$ Divergence Measures Based on the Shannon Entropy, Jianhua Lin, IEEE, Vol. $37$, No. $\rm I,$ January $1991.$
$\rm [III]$ $f$-Divergence is a Generalized Invariant Measure Between Distributions, Yu Qiao, Nobuaki Minematsu, DOI:$10.21437/\rm Interspeech.2008-393.$
$\rm [IV]$ On a generalization of the Jensen-Shannon divergence and the
JS-symmetrization of distances relying on abstract means, Frank Nielsen, May $2019, $ DOI:$10.3390/\rm e21050485.$
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/593928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find mean and variance using characteristic function Consider a random variable with characteristic function
$$
\phi(t)=\frac{3\sin(t)}{t^3}-\frac{3\cos(t)}{t^2}, \ \text{when} \ t \neq0
$$
How can I compute $E(X)$ and $Var(X)$ by using this characteristic function? I'm stuck because when I differentiate I get $\phi'(t)=\frac{3t^2\sin(t)+9t\cos(t)-9\sin(t)}{t^4}$, which is undefined at $t=0$.
Do I need to use Taylor expansion to approximate sin and cos ?
| In general, if the power-series expansion holds for a characteristic function of random variable $X$, which is the case of this $\varphi(t)$ (because power-series expansions for $\sin t$ and $\cos t$ hold and the negative exponent terms cancelled each other, as will be shown in the derivation below), the moments of $X$ can be read off from it (for a rigorous proof, see Probability and Measure by Patrick Billingsley, pp. 342 -- 345):
\begin{align}
\varphi^{(k)}(0) = i^kE[X^k]. \tag{1}
\end{align}
$(1)$ is analogous to the relationship $E[X^k] = m^{(k)}(0)$ between moments and the moment generating function, which is perhaps more familiar to statisticians.
Therefore, to determine $E[X]$ and $\operatorname{Var}(X)$, it is sufficient to evaluate $\varphi'(0)$ and $\varphi''(0)$, to which you can apply L'Hopital's rule multiple times (which is more verbose). First, because $\sin t - t\cos t \to 0$ as $t \to 0$,
\begin{align}
\lim_{t \to 0}\varphi(t) = 3\lim_{t \to 0}\frac{\sin t - t\cos t}{t^3}
= 3\lim_{t \to 0}\frac{\cos t - (\cos t - t\sin t)}{3t^2}
= \lim_{t \to 0} \frac{\sin t}{t} = 1,
\end{align}
which lends us the legitimacy of using L'Hopital rule for evaluating
\begin{align}
\varphi'(0) &= \lim_{t \to 0}\frac{\varphi(t) - \varphi(0)}{t} \\
&= \lim_{t \to 0} \varphi'(t) \\
&= 3\lim_{t \to 0}\frac{t^4\sin t - 3t^2(\sin t - t\cos t)}{t^6} \\
&= 3\lim_{t \to 0}\frac{t^2\sin t - 3\sin t + 3t\cos t}{t^4} \\
&= 3\lim_{t \to 0}\frac{2t\sin t + t^2\cos t - 3\cos t + 3\cos t - 3t\sin t}{4t^3} \\
&= \frac{3}{4}\lim_{t \to 0}\frac{t\cos t - \sin t}{t^2} \\
&= \frac{3}{4}\lim_{t \to 0}\frac{\cos t - t\sin t - \cos t}{2t} = 0.
\end{align}
I will leave the task of getting $\varphi''(0)$ in this way back to you.
Alternatively, a direct power-series expansion method (which I highly recommend as the first option for general limit evaluation tasks) has been mentioned in whuber's answer and my previous comments. In detail, it follows by (see, for example, Eq. (23) and Eq. (6) in this link)
\begin{align}
& \sin t = t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + O(t^7), \\
& \cos t = 1 - \frac{1}{2}t^2 + \frac{1}{4!}t^4 + O(t^6)
\end{align}
that
\begin{align}
& \varphi(t) = 3t^{-3}\sin t - 3t^{-2}\cos t \\
=& 3t^{-3}\left(t - \frac{1}{3!}t^3 + \frac{1}{5!}t^5 + O(t^7)\right)
- 3t^{-2}\left(1 - \frac{1}{2}t^2 + \frac{1}{4!}t^4 + O(t^6)\right) \\
=& 1 + \frac{1}{40}t^2 - \frac{1}{8}t^2 + O(t^4) \\
=& 1 - \frac{1}{10}t^2 + O(t^4).
\end{align}
From which it is immediate to conclude
\begin{align}
\varphi(0) = 1, \; \varphi'(0) = 0, \; \varphi''(0) = -\frac{1}{5}.
\end{align}
It then follows by $(1)$ that
\begin{align}
E[X] = -i\varphi'(0) = 0, \; E[X^2] = -\varphi''(0) = \frac{1}{5}.
\end{align}
Therefore, $E[X] = 0, \operatorname{Var}(X) = \frac{1}{5}$.
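A numeric cross-check using only the stated characteristic function (central differences around $t=0$, with $\varphi(0)=1$ taken as the limit; the step size is arbitrary):
phi <- function(t) 3*sin(t)/t^3 - 3*cos(t)/t^2
h <- 1e-2
(phi(h) - 2*1 + phi(-h)) / h^2    # approx -0.2, i.e. phi''(0) = -1/5
(phi(h) - phi(-h)) / (2*h)        # approx 0,    i.e. phi'(0) = 0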
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/605455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Probability of reaching node A from node B in exactly X steps I have a three-node graph with two edges (A-B and A-C). I would like to determine what the probability is of starting from B and ending at C in exactly 100 steps.
I have only written out probabilities:
P(A|B) = 1
P(B|A) = 0.5
P(A|C) = 1
P(C|A) = 0.5
But there are so many combinations of ways to get from B to C in exactly 100 steps using these probabilities. Any suggestions on how to continue this problem?
| After an odd number of steps you must be at A and after an even number of steps you will be in either B or C, each with probability 0.5, therefore after 100 steps the probability of being in C is 0.5
Edit
More formally we can define a Markov chain with transition matrix:
$$ T = \begin{pmatrix}0&\tfrac{1}{2}&\tfrac{1}{2}\\1&0&0\\1&0&0\end{pmatrix} $$
Now we can compute $T^2$ and $T^3$ to show that for $n\ge 1$, $T^{2n-1}=T$:
$$ T^2 = \begin{pmatrix}1&0&0\\0&\tfrac{1}{2}&\tfrac{1}{2}\\0&\tfrac{1}{2}&\tfrac{1}{2}\end{pmatrix} $$
$$ T^3 = \begin{pmatrix}0&\tfrac{1}{2}&\tfrac{1}{2}\\1&0&0\\1&0&0\end{pmatrix} = T $$
Therefore we calculate that $T^{100}=T^2$ and that $x_0 T^{100} = x_0 T^2$
$$ x_0 T^2 = \begin{pmatrix}0&1&0\end{pmatrix} \begin{pmatrix}1&0&0\\0&\tfrac{1}{2}&\tfrac{1}{2}\\0&\tfrac{1}{2}&\tfrac{1}{2}\end{pmatrix} = \begin{pmatrix}0&\tfrac{1}{2}&\tfrac{1}{2}\end{pmatrix} $$
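The same conclusion in a few lines of base R (repeated multiplication rather than an eigendecomposition, just to keep it self-contained):
Tm <- matrix(c(0, 1/2, 1/2,
               1,   0,   0,
               1,   0,   0), nrow = 3, byrow = TRUE)
P <- diag(3)
for (i in 1:100) P <- P %*% Tm      # P = T^100
c(0, 1, 0) %*% P                    # start at B: gives (0, 0.5, 0.5), so P(end at C) = 0.5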
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/87763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Distribution of $\sqrt{X^2+Y^2}$ when $X,Y$ are independent $U(0,1)$ variables
As a routine exercise, I am trying to find the distribution of $\sqrt{X^2+Y^2}$ where $X$ and $Y$ are independent $ U(0,1)$ random variables.
The joint density of $(X,Y)$ is $$f_{X,Y}(x,y)=\mathbf 1_{0<x,y<1}$$
Transforming to polar coordinates $(X,Y)\to(Z,\Theta)$ such that $$X=Z\cos\Theta\qquad\text{ and }\qquad Y=Z\sin\Theta$$
So, $z=\sqrt{x^2+y^2}$ and $0< x,y<1\implies0< z<\sqrt 2$.
When $0< z<1$, we have $0< \cos\theta<1,\,0<\sin\theta<1$ so that $0<\theta<\frac{\pi}{2}$.
When $1< z<\sqrt 2$, we have $z\cos\theta<1\implies\theta>\cos^{-1}\left(\frac{1}{z}\right)$, as $\cos\theta$ is decreasing on $\theta\in\left[0,\frac{\pi}{2}\right]$; and $z\sin\theta<1\implies\theta<\sin^{-1}\left(\frac{1}{z}\right)$, as $\sin\theta$ is increasing on $\theta\in\left[0,\frac{\pi}{2}\right]$.
So, for $1< z<\sqrt 2$, we have $\cos^{-1}\left(\frac{1}{z}\right)<\theta<\sin^{-1}\left(\frac{1}{z}\right)$.
The absolute value of jacobian of transformation is $$|J|=z$$
Thus the joint density of $(Z,\Theta)$ is given by
$$f_{Z,\Theta}(z,\theta)=z\mathbf 1_{\{z\in(0,1),\,\theta\in\left(0,\pi/2\right)\}\bigcup\{z\in(1,\sqrt2),\,\theta\in\left(\cos^{-1}\left(1/z\right),\sin^{-1}\left(1/z\right)\right)\}}$$
Integrating out $\theta$, we obtain the pdf of $Z$ as
$$f_Z(z)=\frac{\pi z}{2}\mathbf 1_{0<z<1}+\left(\frac{\pi z}{2}-2z\cos^{-1}\left(\frac{1}{z}\right)\right)\mathbf 1_{1<z<\sqrt 2}$$
Is my reasoning above correct? In any case, I would like to avoid this method and instead try to find the cdf of $Z$ directly. But I couldn't find the desired areas while evaluating $\mathrm{Pr}(Y\le \sqrt{z^2-X^2})$ geometrically.
EDIT.
I tried finding the distribution function of $Z$ as
\begin{align}
F_Z(z)&=\Pr(Z\le z)
\\&=\Pr(X^2+Y^2\le z^2)
\\&=\iint_{x^2+y^2\le z^2}\mathbf1_{0<x,y<1}\,\mathrm{d}x\,\mathrm{d}y
\end{align}
Mathematica says this should reduce to
$$F_Z(z)=\begin{cases}0 &,\text{ if }z<0\\ \frac{\pi z^2}{4} &,\text{ if } 0< z<1\\ \sqrt{z^2-1}+\frac{z^2}{2}\left(\sin^{-1}\left(\frac{1}{z}\right)-\sin^{-1}\left(\frac{\sqrt{z^2-1}}{z}\right)\right) &,\text{ if }1< z<\sqrt 2\\ 1 &,\text{ if }z>\sqrt 2 \end{cases}$$
which looks like the correct expression. Differentiating $F_Z$ for the case $1< z<\sqrt 2$ though brings up an expression which doesn't readily simplify to the pdf I already obtained.
Finally, I think I have the correct pictures for the CDF:
For $0<z<1$ :
And for $1<z<\sqrt 2$ :
Shaded portions are supposed to indicate the area of the region $$\left\{(x,y):0<x,y< 1\,,\,x^2+y^2\le z^2\right\}$$
The picture immediately yields
\begin{align}
F_Z(z)&=\Pr\left(-\sqrt{z^2-X^2}\le Y\le\sqrt{z^2-X^2}\right)
\\&=\begin{cases}\frac{\pi z^2}{4} &,\text{ if } 0<z<1\\\\ \sqrt{z^2-1}+\int_{\sqrt{z^2-1}}^1 \sqrt{z^2-x^2}\,\mathrm{d}x &,\text{ if }1< z<\sqrt 2 \end{cases}
\end{align}
, as I had previously found.
| For $0 \leq z \leq 1$, $P\left(\sqrt{X^2+Y^2} \leq z\right)$ is just the area of the quarter-circle of radius $z$ which is $\frac 14 \pi z^2$. That is,
$$\text{For }0 \leq z \leq 1, ~\text{area of quarter-circle} = \frac{\pi z^2}{4} = P\left(\sqrt{X^2+Y^2} \leq z\right).$$
For $1 < z \leq \sqrt{2}$, the region over which we need to integrate to find $P\left(\sqrt{X^2+Y^2} \leq z\right)$ can be divided into two right triangles $\big($one of them has vertices $(0,0), (0,1)$ and $(\sqrt{z^2-1}, 1)$ while the other has vertices $(0,0), (1,0)$ and $(1, \sqrt{z^2-1})$ $\big)$ together with a sector of a circle of radius $z$ and included angle $\frac{\pi}{2}-2\arccos\left(\frac{1}{z}\right)$. The area of this region (and hence the value of $P\left(\sqrt{X^2+Y^2} \leq z\right)$) is easily found. We have that for $1 < z \leq \sqrt{2}$,
\begin{align}\text{area of region} &= \text{area of two triangles plus area of sector}\\
&=\sqrt{z^2-1} + \frac 12 z^2\left( \frac{\pi}{2}-2\arccos \left(\frac{1}{z}\right)\right)\\
&= \frac{\pi z^2}{4} + \sqrt{z^2-1} - z^2\arccos \frac{1}{z}\\
&= P\left(\sqrt{X^2+Y^2} \leq z\right)\end{align}
which is the result in Martijn Wetering's answer.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/323617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 2
} |
Finding the conditional distribution of single sample point given sample mean for $N(\mu, 1)$
Suppose that $X_1, \ldots, X_n$ are iid from $N(\mu, 1)$. Find the conditional distribution of $X_1$ given $\bar{X}_n = \frac{1}{n}\sum^n_{i=1} X_i$.
So I know that $\bar{X}_n$ is a sufficient statistic for $\mu$ and $X_1$ is an unbiased estimator of $\mu$, but I don't know how to proceed from here?
| Firstly, we need to find the joint distribution of $(X_1, \bar{X})$ (for simplicity, write $\bar{X}$ for $\bar{X}_n$). It is easily seen that
\begin{equation}
\begin{bmatrix}
X_1 \\
\bar{X}
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & \cdots & 0 \\
\frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n}
\end{bmatrix}
\begin{bmatrix}
X_1 \\ X_2 \\ \vdots \\ X_n
\end{bmatrix}\tag{1}
\end{equation}
In view of $(1)$, $(X_1, \bar{X})$ has jointly normal distribution, with the mean vector
\begin{align}
\mu_0 = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
\frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n}
\end{bmatrix}
\begin{bmatrix}
\mu \\ \mu \\ \vdots \\ \mu
\end{bmatrix}
= \begin{bmatrix}
\mu \\ \mu
\end{bmatrix},
\end{align}
and the covariance matrix
\begin{align}
\Sigma = \begin{bmatrix}
1 & 0 & \cdots & 0 \\
\frac{1}{n} & \frac{1}{n} & \cdots & \frac{1}{n}
\end{bmatrix}
I
\begin{bmatrix}
1 & \frac{1}{n} \\
0 & \frac{1}{n} \\
\vdots & \vdots \\
0 & \frac{1}{n}
\end{bmatrix}
= \begin{bmatrix}
1 & \frac{1}{n} \\
\frac{1}{n} & \frac{1}{n}
\end{bmatrix}.
\end{align}
Now according to conditional distributions of a MVN distribution:
$X_1 | \bar{X} \sim N(\mu_1, \sigma_1^2),$
where
\begin{align}
& \mu_1 = \mu + \frac{1}{n}\times n \times (\bar{X} - \mu) = \bar{X}, \\
& \sigma_1^2 = 1 - \frac{1}{n}\times n \times\frac{1}{n} = 1 - \frac{1}{n}.
\end{align}
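A hedged simulation check in R: for jointly normal variables the result is equivalent to a regression of $X_1$ on $\bar X$ having slope $1$ and residual variance $1-1/n$ (the choices $n=5$ and $\mu=3$ below are arbitrary):
set.seed(1)
n <- 5; reps <- 2e5
x <- matrix(rnorm(n * reps, mean = 3), nrow = reps)
x1 <- x[, 1]; xbar <- rowMeans(x)
fit <- lm(x1 ~ xbar)
coef(fit)           # intercept approx 0, slope approx 1
var(resid(fit))     # approx 1 - 1/n = 0.8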
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/434427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Trying to approximate $E[f(X)]$ - Woflram Alpha gives $E[f(X)] \approx \frac{1}{\sqrt{3}}$ but I get $E[f(X)] \approx 0$? Let $X \sim \mathcal{N}(\mu_X,\sigma_X^2) = \mathcal{N}(0,1)$. Let $f(x) = e^{-x^2}$. I want to approximate $E[f(X)]$.
Wolfram Alpha gives
\begin{align}
E[f(X)] \approx \frac{1}{\sqrt{3}}.
\end{align}
Using a Taylor expansion approach, and noting that $f''(0) = -2$, I get
\begin{align}
E[f(X)] &\approx f(\mu_X) + \frac{f''(\mu_X)}{2} \sigma_X^2 \\
& = f(0) + \frac{f''(0)}{2} \\
& = 1 + \frac{-2}{2} \\
& = 0.
\end{align}
Why does my approximation fail to match the Wolfram Alpha result? What can be done to fix it?
| There's no need to "approximate" when you can derive the exact value of $\mathbb{E}[f(X)]$. Let us apply the Law of the Unconscious Statistician (LoTUS) to obtain:
\begin{align*}
\mathbb{E}[f(X)] &= \int_{-\infty}^{+\infty} e^{-x^2} \cdot \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{x^2}{2}\right)~dx\\
&= 2\int_0^{+\infty} \frac{1}{\sqrt{2\pi}} \exp\left(-\frac{3x^2}{2}\right)~dx\\
&= 2\int_0^{+\infty} \frac{1}{\sqrt{2\pi}} \cdot\frac{1}{\sqrt{6z}} e^{-z}~dz\\
&= \frac{2}{2\sqrt{3\pi}} \int_0^{+\infty} e^{-z} z^{\frac{1}{2} -1} ~dz\\
&=\frac{1}{\sqrt{3\pi}}\cdot \Gamma\left(\frac{1}{2}\right)\\
&= \frac{1}{\sqrt{3\pi}}\cdot \sqrt{\pi}\\
&= \frac{1}{\sqrt{3}}
\end{align*}
Hope this helps. :)
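A one-line Monte Carlo check agrees with the exact value; it also suggests why the second-order Taylor approximation in the question is unreliable here: with $\sigma_X=1$ the neglected higher-order terms are not small.
set.seed(1)
mean(exp(-rnorm(1e6)^2))   # about 0.577
1/sqrt(3)                  # 0.5773503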
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/495042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
} |
Trouble finding var(ax) So the variance of a 6-sided (1,2,3,4,5,6) die is $2.91\bar{6}$ using the formula:
$$
\text{Var}(X) = \frac{(b-a+1)^2}{12}
$$
Also, $\text{Var}(10X) = 10^2 \cdot \text{Var}(X)$, so that would mean $\text{Var}(10X) = 291.6$.
If I want to find the variance of $10X$, is this not the same as multiplying each of my values by $10$? So that I'm now finding the variance of $(10,20,30,40,50,60)$? If I use the same formula as above, my answer is $216.67$, which is not equal to $291.6$ which I expected. Not exactly sure which part of my logic is incorrect.
| Let's find a formula that will apply to both your situations.
One description that covers them both supposes $X$ is a uniform random variable defined on an arithmetic progression
$$x_1, x_2, \ldots, x_n = a, a+d, a+2d,\ldots, a+(n-1)d = b.$$
Thus $x_i=a+(i-1)d$ and each $x_i$ has a probability $1/n.$ By definition
$$E[X] = \sum_{i=1}^n \Pr(x_i)x_i = \sum_{i=1}^n \frac{1}{n}\, (a+(i-1)d) = \frac{2a+(n-1)d}{2} = \frac{a+b}{2}.$$
Then, also by definition,
$$\begin{aligned}
\operatorname{Var}(X) &= E[(X-E[X])^2] = \sum_{i=1}^n \Pr(x_i)(x_i-E[X])^2 \\
&= \sum_{i=1}^n \frac{1}{n}\, \left(a + (i-1)d - \frac{2a+(n-1)d}{2}\right)^2\\
&=d^2\frac{n^2-1}{12}.
\end{aligned}$$
The factor of $d^2$ is precisely what you expected from the scaling law for the variance. Let's check.
*
*The standard die is described by $a=1,$ $d=1,$ and $n=6,$ so that $$d^2\frac{n^2-1}{12} = (1)^2 \frac{6^2-1}{12} = \frac{35}{12} = 2.91\bar6.$$
*Upon multiplying by $10$ we have $a=10,$ $d=10,$ and $n=6$ still, so that $$d^2\frac{n^2-1}{12} = (10)^2 \frac{6^2-1}{12} = 100 \frac{35}{12} = 291.\bar6.$$
When you obtained $216.67,$ you were applying the formula $((b-a+1)^2 - 1)/12$ (notice the additional "-1" in the numerator). But in terms of $a,$ $n,$ and $d,$ this is
$$\frac{(b-a+1)^2-1}{12} = \frac{(a+(n-1)d - a + 1)^2-1}{12} = \frac{(d(n-1)+1)^2-1}{12}$$
which gives the correct value only when $d=0$ or $d=1.$ Your formula does not apply to any other situation. That's why we needed to work out the generalization.
Finally, if you would prefer a formula in terms of the two endpoints $a$ and $b$ and the count $n\gt 1,$ you can recover $d$ as
$$d = \frac{b-a}{n-1}$$
and plug that in to get
$$\operatorname{Var}(X) = \left(\frac{b-a}{n-1}\right)^2 \frac{n^2-1}{12} = \frac{(b-a)^2}{12}\,\frac{n^2-1}{(n-1)^2}.$$
This is informative, because for medium to large $n,$ the second fraction is close to $1$ (the error is on the order of $1/n$) and can be ignored. What is left is the variance of the variable uniformly distributed over all numbers between $a$ and $b.$ The quadratic dependence on the scale is explicit in the factor $(b-a)^2.$
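A direct numeric check (note that R's var() divides by $n-1$, so the population variance is computed by hand):
pop_var <- function(x) mean(x^2) - mean(x)^2
pop_var(1:6)              # 35/12 = 2.9167
pop_var(10 * (1:6))       # 100 * 35/12 = 291.67
d <- 10; n <- 6
d^2 * (n^2 - 1) / 12      # same value from the general formula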
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/504425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Prove that $(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$ I have this equality
$$(A^{-1} + B^{-1})^{-1}=A(A+B)^{-1}B$$ where $A$ and $B$ are square symmetric matrices.
I have done many test of R and Matlab that show that this holds, however I do not know how to prove it.
| Note that
$$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B}$$
is the inverse of
$$\left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) $$
if and only if
$$ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) = \mathbf{I} $$
and
$$ \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} = \mathbf{I} $$
so that the left and right inverses coincide. Let's prove the first statement. We can see that
$$\begin{align} \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B} \left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) & = \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \left(\mathbf{B} \mathbf{A}^{-1} + \mathbf{I} \right) \\ &= \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \left( \mathbf{A} + \mathbf{B} \right) \mathbf{A}^{-1} \\ & = \mathbf{I} \end{align} $$
as desired. A similar trick will prove the second statement as well. Thus $ \mathbf{A} \left(\mathbf{A} + \mathbf{B} \right)^{-1} \mathbf{B}$ is indeed the inverse of $\left(\mathbf{A}^{-1} + \mathbf{B}^{-1} \right) $.
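A quick numeric spot-check in R with random symmetric positive-definite matrices (the construction of A and B is arbitrary):
set.seed(1)
M <- matrix(rnorm(9), 3); A <- crossprod(M) + diag(3)
M <- matrix(rnorm(9), 3); B <- crossprod(M) + diag(3)
lhs <- solve(solve(A) + solve(B))
rhs <- A %*% solve(A + B) %*% B
max(abs(lhs - rhs))   # numerically zero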
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/197067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
sample all pairs without repeats Assuming I have a very large number of $K$ colored balls and we know the fraction of each color. If we randomly sample all pairs so that all pairs have two balls of different colors then what is the fraction of pairs with a given color combinations?
For example if there are 3 colors with fractions, $f_{blue}=0.5$, $f_{red}=0.25$ and $f_{green}=0.25$ then if you sample all pairs of balls such that each pairs consists of two different colors. Then I would expect that $50\%$ of pairs will be $(blue/red)$, $50\%$ will be $(blue/green)$ and $0\%$ $(red/green)$ (if 50% is blue then there must be a blue in each pair). This scenario is easy since there is only one way to sample all unordered pairs.
If more than $50\%$ of balls have the same color there will be no solution and if $50\%$ of balls are a certain color there is only one way to sample all balls (unordered) as above.
If the fraction of 3 colors are then same $f_{blue}=1/3$, $f_{red}=1/3$ and $f_{green}=1/3$ then by symmetry I would expect the fraction of pairs to be $1/3$ $(blue/red)$, $1/3$ $(blue/green)$ and $1/3$ $(red/green)$ if they where randomly sampled.
Is there a general way to calculate the expected fraction of colored pairs given you know the fraction of each of the $K$ colors?
Edit/update
Ertxiem gave the solution in the case of K=3 where you do not use the assumption of randomly drawing all pairs (pairs of different colors without replacement).
Here is what I have tried so far.
Let $f=(f_1,f_2,\ldots,f_K)$ be the fraction of each colored ball assuming K colors.
For the case of K=3 then we can calculate the fraction of pairs of ball by solving the following
$Ax=f$ where $A= \left( \begin{array}{ccc}
0.5 & 0.5 & 0 \\
0.5 & 0 & 0.5 \\
0 & 0.5 &0.5
\end{array}
\right)
$, $f= \left( \begin{array}{c}
f_1\\
f_2 \\
f_3
\end{array}
\right)$, $x= \left( \begin{array}{c}
\pi_{12}\\
\pi_{13} \\
\pi_{23}
\end{array}\right)$ where $\pi_{ij}$is the probability of a pair of color $i$ and $j$.
This gives the solution for $K=3$.
For $K>3$ we cannot use the same approach because there will be multiple solutions for example for $K=4$. Solving $Ax=f$ where $A= \left( \begin{array}{cccccc}
0.5 & 0.5 & 0.5 & 0 & 0 & 0 \\
0.5 & 0 & 0 &0.5 &0.5 &0 \\
0 & 0.5 &0 & 0.5 & 0 & 0.5 \\
0 & 0 &0.5 & 0& 0.5& 0.5
\end{array}
\right)
$, $f= \left( \begin{array}{c}
f_1\\
f_2 \\
f_3 \\
f_4
\end{array}
\right)$ and $x= \left( \begin{array}{c}
\pi_{12}\\
\pi_{13} \\
\pi_{14} \\
\pi_{23} \\
\pi_{24} \\
\pi_{34}\end{array}\right)$ gives multiple solutions. Is there a way to solve it by assuming the color combinations are independent given that they are different?
update with example and simulations
For the $K=4$ case then I have tried to solve $Ax=f$ for $x$ using Moore-Penrose generalized inverse (pseudoinverse using least squared solution) however, this does not give the same results as simulations (rejection sampling using $5e7$ balls). For the case of $f=(3/8,1/8,2/8,2/8)$ I get the following results
$$\begin{array}{c|c|c}
\hat{\pi} & pseudo inverse & simulations\\
\pi_{12} & 4/24 & 31/236\\
\pi_{13} & 7/24 & 73/236\\
\pi_{14} & 7/24 & 73/236\\
\pi_{23} & 1/24 & 14/236\\
\pi_{24} & 1/24 & 14/236\\
\pi_{34} & 4/24 & 31/236
\end{array}$$.
So I am still not able to find an analytical solution (for $K>3$).
| Suppose you have $K$ colours of balls with respective numbers $n_1,...,n_K$, with a total of $n = \sum n_i$ balls. Let $\mathscr{S}$ denote the set of all pairs of distinct balls and let $\mathscr{C}$ denote the set of all pairs of distinct balls of the same colour. Since $\mathscr{C} \subset \mathscr{S}$ the number of ways you can sample two balls of different colours is:
$$\begin{equation} \begin{aligned}
|\mathscr{S} - \mathscr{C}| = |\mathscr{S}| - |\mathscr{C}|
&= {n \choose 2} - \sum_{k=1}^K {n_k \choose 2} \\[6pt]
&= \frac{n(n-1)}{2} - \sum_{k=1}^K \frac{n_k (n_k-1)}{2} \\[6pt]
&= \frac{1}{2} \Big[ n(n-1) - \sum_{k=1}^K n_k (n_k-1) \Big] \\[6pt]
&= \frac{1}{2} \Big[ (n-1) \sum_{k=1}^K n_k - \sum_{k=1}^K n_k (n_k-1) \Big] \\[6pt]
&= \frac{1}{2} \sum_{k=1}^K n_k (n-n_k). \\[6pt]
\end{aligned} \end{equation}$$
Let $\mathscr{M}_{a,b}$ denote the set of all pairs of distinct balls with colours $a \neq b$. The number of ways you can sample two balls with a given (different) colour combination is:
$$|\mathscr{M}_{a,b}| = n_a n_b.$$
Hence, the fraction of sample-pairs of different colours that are of the specified colour pair $a \neq b$ is:
$$P_n(a,b) = \frac{|\mathscr{M}_{a,b}|}{|\mathscr{S} - \mathscr{C}|} = \frac{2 n_a n_b}{\sum_{k=1}^K n_k (n-n_k)}.$$
Taking $n \rightarrow \infty$ and letting $p_1,...,p_K$ be the respective limiting sample proportions of the balls of each colour, you have:
$$P_\infty(a,b) = \lim_{n \rightarrow \infty} \frac{|\mathscr{M}_{a,b}|}{|\mathscr{S} - \mathscr{C}|} = \frac{2 p_a p_b}{\sum_{k=1}^K p_k (1-p_k)}.$$
Application to your problem: In your example you have $K=3$ colours with proportions $\mathbf{p} = (\tfrac{1}{2}, \tfrac{1}{4}, \tfrac{1}{4})$ for the respective colours $\text{Blue}, \text{Red}, \text{Green}$. This gives:
$$P_\infty(a,b)
= \frac{2 p_a p_b}{\tfrac{1}{2} \cdot \tfrac{1}{2} + \tfrac{1}{4} \cdot \tfrac{3}{4} + \tfrac{1}{4} \cdot \tfrac{3}{4}}
= \frac{2 p_a p_b}{5/8}
= \tfrac{16}{5} \cdot p_a p_b.$$
So you have:
$$\begin{equation} \begin{aligned}
P_\infty(\text{Blue}, \text{Red})
&= \tfrac{16}{5} \cdot \tfrac{1}{2} \cdot \tfrac{1}{4} = \tfrac{2}{5}, \\[6pt]
P_\infty(\text{Blue}, \text{Green})
&= \tfrac{16}{5} \cdot \tfrac{1}{2} \cdot \tfrac{1}{4} = \tfrac{2}{5}, \\[6pt]
P_\infty(\text{Red}, \text{Green})
&= \tfrac{16}{5} \cdot \tfrac{1}{4} \cdot \tfrac{1}{4} = \tfrac{1}{5}. \\[6pt]
\end{aligned} \end{equation}$$
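A quick simulation supports this (R sketch): in the large-$n$ limit, sampling a pair of distinct balls of different colours amounts to two independent colour draws conditioned on the colours differing.
set.seed(1)
p <- c(blue = 0.5, red = 0.25, green = 0.25)
a <- sample(names(p), 1e6, replace = TRUE, prob = p)
b <- sample(names(p), 1e6, replace = TRUE, prob = p)
keep <- a != b
lo <- ifelse(a[keep] < b[keep], a[keep], b[keep])
hi <- ifelse(a[keep] < b[keep], b[keep], a[keep])
round(prop.table(table(paste(lo, hi, sep = "/"))), 3)   # blue/green 0.4, blue/red 0.4, green/red 0.2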
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/402971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Correct or not? Mixed Bayes' Rule - Noisy Communication In this problem, we study a simple noisy communication channel. Suppose that $X$ is a binary signal that takes values $−1$ and $1$ with equal probability. This signal $X$ is sent through a noisy communication channel, and the medium of transmission adds an independent noise term. More precisely, the received signal is $Y=X+N$, where $N$ is standard normal, independent of $X$.
The decoder receives the value $y$ of $Y$, and decides whether $X$ was $1$ or $−1$, using the following decoding rule: it decides in favor of $1$ if and only if
$$P(X=1|Y=y)>2P(X=-1|Y=y)$$
It turns out that the decoding rule can be expressed in the form: decide in favor of $1$ if and only if $Y>t$, for some threshold $t$. Find the threshold $t$.
As an intermediate step, find $p_1≜P(X=1|Y=y).$
I get the answer of $p_1 = \dfrac{e^{-(y-1)^2/2}}{e^{-(y-1)^2/2}+e^{-(y+1)^2/2}}$
or is the answer $p_1 = \dfrac{1}{1 + e^{-2y}}$?
| Both answers are correct.
The likelihood is defined as
$$
p \left(Y \mid X=1 \right) = \frac{1}{\sqrt{2\pi}}\, e^{\frac{-\left(y-1 \right)^2}{2}}
$$
Assuming both $X=1$ and $X=-1$ have the same probability, $p(X=1)=\frac{1}{2}$, the posterior is found with Bayes' rule as follows.
$$
\begin{align}
p \left(X=1 \mid Y \right) &= \frac{p \left(Y \mid X=1 \right)\cdot p \left(X=1 \right)}{p\left(Y \right)}\\
&=\frac{p \left(Y \mid X=1 \right)\cdot \frac{1}{2}}{\frac{1}{2} \cdot p\left(Y\mid X=1 \right) + \frac{1}{2} \cdot p\left(Y\mid X=-1 \right)}\\
&= \frac{\frac{1}{\sqrt{2\pi}}\, e^{\frac{-\left(y-1 \right)^2}{2}} \cdot \frac{1}{2}}{ \frac{1}{2} \cdot \frac{1}{\sqrt{2\pi}}\, e^{\frac{-\left(y-1 \right)^2}{2}} + \frac{1}{2} \cdot \frac{1}{\sqrt{2\pi}}\, e^{\frac{-\left(y+1 \right)^2}{2}} } \\
&= \frac{e^{\frac{-\left(y-1 \right)^2}{2}} }{ e^{\frac{-\left(y-1 \right)^2}{2}} + e^{\frac{-\left(y+1 \right)^2}{2}} }
\end{align}
$$
This can be manipulated further to obtain your second answer
$$
\begin{align}
p \left(X=1 \mid Y \right) &= \frac{e^{\frac{-\left(y-1 \right)^2}{2}} }{ e^{\frac{-\left(y-1 \right)^2}{2}} + e^{\frac{-\left(y+1 \right)^2}{2}} }\\
&= \frac{1}{ 1+ e^{\frac{-\left(y+1 \right)^2}{2} - \frac{-\left(y-1 \right)^2}{2}} }\\
&= \frac{1}{ 1+ e^{-2y} }
\end{align}
$$
Couldn't resist finishing the rest of the exercise as well. The threshold $t$ is the value of $y$ that satisfies the following
$$
p \left(X=1 \mid Y =t\right) = 2 \cdot p \left(X=-1 \mid Y=t \right)
$$
Plugging in the formula of the posterior distribution found above, and solving for $t$ results in
$$
t = \frac{\log_e 2}{2}
$$
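A tiny R check of the threshold (sketch):
t <- log(2) / 2
p1 <- 1 / (1 + exp(-2 * t))   # P(X = 1 | Y = t)
p1 / (1 - p1)                 # posterior odds equal exactly 2, as the decoding rule requires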
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/420744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
calculate probability using joint density function I'm stuck with this question:
X, Y are random variables and their joint density function is:
$$f_{X,Y}(x,y)=2, \qquad 0\le x\le 1,\; 0\le y\le x$$
Now we define new random variable Z: $$Z=XY^3$$
I need to calculate the value of $$F_Z(0.3)$$
and I'm not sure which bounds I should use when integrating the joint density of $X$ and $Y$.
| Let $\mathcal A_t= \left \{0 \leq x \leq 1, 0 \leq y \leq x : xy^3 \leq t \right\}$
The probability $\mathbb P(XY^3 \leq t)$ can be seen as a double integral over $\mathcal A_t$:
$$
\mathbb P(XY^3 \leq t) = 2 \int_{A_t} dxdy
$$
The condition $xy^3 \leq t$ imply that $y \leq \left( \frac{t}{x} \right)^{\frac{1}{3}}$
and $\left( \frac{t}{x} \right)^{\frac{1}{3}} \leq x \Rightarrow x \geq t^{\frac{1}{4}}$.
So in the above integral, for a fixed $x$, if $x \geq t^{\frac{1}{4}}$ then integration over $y$ ranges from $0$ to $\left( \frac{t}{x} \right)^{\frac{1}{3}}$ (it is $0$ otherwise) and if $x \leq t^{\frac{1}{4}}$ it ranges from $0$ to $x$.
Hence we have:
\begin{align*}
\int_{A_t} dxdy &= \int_0^{t^{\frac{1}{4}}} x dx + \int_{t^{\frac{1}{4}}}^1\left( \frac{t}{x} \right)^{\frac{1}{3}} dx \\
&= \left[ \frac{x^2}{2} \right]_0^{t^{\frac{1}{4}}} + t^{\frac{1}{3}} \left[ \frac{3}{2} x^{\frac{2}{3}}\right ]_{t^{\frac{1}{4}}}^1 \\
&= \frac{\sqrt{t}}{2} + \frac{3}{2}t^{\frac{1}{3}}\left(1-t^{\frac{1}{6}} \right) \\
&= \frac{\sqrt{t}}{2} + \frac{3}{2}t^{\frac{1}{3}}-\frac{3\sqrt{t}}{2} \\
&=\frac{3}{2}t^{\frac{1}{3}} - \sqrt{t}
\end{align*}
Multiplying this by $2$ yields the desired probability:
\begin{align*}
\mathbb P(Z \leq t) = 3t^{\frac{1}{3}} - 2 \sqrt{t}
\end{align*}
For $t=0.3$ this gives $\mathbb P(Z \leq t) \approx 0.913$.
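A Monte Carlo check in R (sketch; points are drawn uniformly on the triangle $0<y<x<1$, which has density $2$):
set.seed(1)
x <- runif(2e6); y <- runif(2e6)
keep <- y <= x
mean(x[keep] * y[keep]^3 <= 0.3)   # approx 0.913
3 * 0.3^(1/3) - 2 * sqrt(0.3)      # closed form, 0.9128...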
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/545025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Convergence of random variables Trying to understand the solution given to this homework problem:
Define random variables $X$ and $Y_n$ where $n=1,2\ldots%$ with probability mass functions:
$$
f_X(x)=\begin{cases}
\frac{1}{2} &\mbox{if } x = -1 \\
\frac{1}{2} &\mbox{if } x = 1 \\
0 &\mbox{otherwise}
\end{cases} and\; f_{Y_n}(y)=\begin{cases}
\frac{1}{2}-\frac{1}{n+1} &\mbox{if } y = -1 \\
\frac{1}{2}+\frac{1}{n+1} &\mbox{if } y = 1 \\
0 &\mbox{otherwise}
\end{cases}
$$
Need to show whether $Y_n$ converges to $X$ in probability.
From this I can define the probability space $\Omega=([0,1],U)$ and express the random variables as functions of indicator variables as such:
$X = 1_{\omega > \frac{1}{2}} - 1_{\omega < \frac{1}{2}}$
and
$Y_n = 1_{\omega < \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega > \frac{1}{2}+\frac{1}{n+1}}$
And from the definition of convergence in probability, we need to determine whether
$P\{|Y_n-X|>\epsilon\}$ does or does not converge to zero, which can be written as:
$P\{|1_{\omega < \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega > \frac{1}{2}+\frac{1}{n+1}} - 1_{\omega > \frac{1}{2}} + 1_{\omega < \frac{1}{2}}| > \epsilon \}\;\;(1)$
Now it's easy to see that $\epsilon < 2$ for this to hold, but the solution given states that:
$P\{|Y_n-X|>\epsilon\} = 1 - \frac{1}{n+1} \;\; (2)$
Thus $Y_n$ does not converge in probability to $X$.
My problem is that I don't see the reasoning between (1) and (2). Can anyone shed some insight into intermediate steps/reasoning required to make this step?
| You're told that
$$
P(X=1)=P(X=-1)=1/2 \, ,
$$
and
$$
P(Y_n=1)=\frac{1}{2} + \frac{1}{n+1} \;\;\;, \qquad P(Y_n=-1)=\frac{1}{2} - \frac{1}{n+1} \;\;\;,
$$
for $n\geq 1$, and you're asked whether or not $Y_n$ converges to $X$ in probability, which means that
$$
\lim_{n\to\infty} P(|Y_n-X|\geq \epsilon) = 0 \, , \qquad (*)
$$
for every $\epsilon>0$.
I will assume that $X$ is independent of the $Y_n$'s.
It is not the case that $Y_n$ converges in probability to $X$, because $(*)$ does not hold for every $\epsilon>0$.
For instance, if we take $\epsilon=1$, then
$$
P(|Y_n-X|\geq 1)=P(Y_n=1, X=-1) + P(Y_n=-1,X=1)
$$
$$
= P(Y_n=1)P(X=-1) + P(Y_n=-1)P(X=1)
$$
$$
= \left(\frac{1}{2} + \frac{1}{n+1}\right) \cdot \frac{1}{2} + \left(\frac{1}{2} - \frac{1}{n+1}\right) \cdot \frac{1}{2} = \frac{1}{2} \, ,
$$
for every $n\geq 1$.
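A quick simulation in R illustrates this (sketch; $X$ and $Y_n$ drawn independently, with $n = 1000$ chosen arbitrarily):
set.seed(1)
n <- 1000
x <- sample(c(-1, 1), 1e6, replace = TRUE)
y <- sample(c(-1, 1), 1e6, replace = TRUE, prob = c(1/2 - 1/(n + 1), 1/2 + 1/(n + 1)))
mean(abs(y - x) >= 1)   # approx 0.5, no matter how large n is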
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/40701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How can I calculate $\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$ Suppose $\phi(\cdot)$ and $\Phi(\cdot)$ are density function and distribution function of the standard normal distribution.
How can one calculate the integral:
$$\int^{\infty}_{-\infty}\Phi\left(\frac{w-a}{b}\right)\phi(w)\,\mathrm dw$$
| Here is another solution: We define
\begin{align*}
I(\gamma) & =\int_{-\infty}^{\infty}\Phi(\xi x+\gamma)\mathcal{N}(x|0,\sigma^{2})dx,
\end{align*}
which we can evaluate at an appropriate $\gamma$ to obtain our desired expression (for the integral in the question, take $\xi = 1/b$, $\sigma^2 = 1$ and $\gamma = -a/b$).
We know at least one function value of $I(\gamma)$: by symmetry $I(0)=\tfrac{1}{2}$, and $I(\gamma)\to 0$ as $\gamma\to-\infty$. We take the derivative with respect to $\gamma$
\begin{align*}
\frac{dI}{d\gamma} & =\int_{-\infty}^{\infty}\mathcal{N}((\xi x+\gamma)|0,1)\mathcal{N}(x|0,\sigma^{2})dx\\
& =\int_{-\infty}^{\infty}\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}\left(\xi x+\gamma\right)^{2}\right)\frac{1}{\sqrt{2\pi\sigma^{2}}}\exp\left(-\frac{x^{2}}{2\sigma^{2}}\right)dx.
\end{align*}
and complete the square
\begin{align*}
\left(\xi x+\gamma\right)^{2}+\frac{x^{2}}{\sigma^{2}} & =\underbrace{\left(\xi^{2}+\sigma^{-2}\right)}_{=a}x^{2}+\underbrace{-2\gamma\xi}_{=b}x+\underbrace{\gamma^{2}}_{=c} \\
&=a\left(x-\frac{b}{2a}\right)^{2}+\left(c-\frac{b^{2}}{4a}\right),\\[6pt]
\text{where}\quad c-\frac{b^{2}}{4a}
& =\gamma^{2}-\frac{4\gamma^{2}\xi^{2}}{4\left(\xi^{2}+\sigma^{-2}\right)}\\
&=\gamma^{2}\left(1-\frac{\xi^{2}}{\xi^{2}+\sigma^{-2}}\right)\\
&=\gamma^{2}\left(\frac{1}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
Thus,
\begin{align*}
\frac{dI}{d\gamma} & =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\int_{-\infty}^{\infty}\sqrt{\frac{a}{2\pi}}\exp\left(-\frac{1}{2}a\left(x-\frac{b}{2a}\right)^{2}\right)dx\\
& =\frac{1}{2\pi\sigma}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\sqrt{\frac{2\pi}{a}}\\
&=\frac{1}{\sqrt{2\pi\sigma^{2}a}}\exp\left(-\frac{1}{2}\left(c-\frac{b^{2}}{4a}\right)\right)\\
& =\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{\gamma^{2}}{1+\xi^{2}\sigma^{2}}\right)
\end{align*}
and integration yields
$$
\begin{align*}
I(\gamma)
&=\int_{-\infty}^{\gamma}\frac{1}{\sqrt{2\pi\left(1+\sigma^{2}\xi^{2}\right)}}\exp\left(-\frac{1}{2}\frac{z^{2}}{1+\xi^{2}\sigma^{2}}\right)dz\\
&=\Phi\left(\frac{\gamma}{\sqrt{1+\xi^{2}\sigma^{2}}}\right)
\end{align*}
$$
which implies
$$
\begin{align*}
\int_{-\infty}^{\infty}\Phi(\xi x)\mathcal{N}(x|\mu,\sigma^{2})dx
&=I(\xi\mu)\\
&=\Phi\left(\frac{\xi\mu}{\sqrt{1+\xi^{2}\sigma^{2}}}\right).
\end{align*}
$$
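A numerical check of the resulting identity in the question's notation, i.e. $\int\Phi\left(\frac{w-a}{b}\right)\phi(w)\,dw=\Phi\left(\frac{-a}{\sqrt{1+b^2}}\right)$ (R sketch; the values $a = 1.3$, $b = 0.7$ are arbitrary):
a <- 1.3; b <- 0.7
integrate(function(w) pnorm((w - a) / b) * dnorm(w), -Inf, Inf)$value   # approx 0.143
pnorm(-a / sqrt(1 + b^2))                                               # same value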
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/61080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "53",
"answer_count": 3,
"answer_id": 2
} |
Conditional Expectation 3 variables Suppose $X,Y$ and $Z$ are jointly multivariate normal with known means and a full covariance matrix. The conditional expectation $E(X \mid Y)$ is well known. What is the conditional expectation $E(X \mid Y,Z)$ if $Y$ and $Z$ (and $X$) are correlated? Standard textbooks only seem to cover the case when $Y$ and $Z$ are uncorrelated.
| If $\mathbf{x} \in \mathbb{R}^n, \mathbf{y} \in \mathbb{R}^m$ are jointly Gaussian,
\begin{align}
\begin{pmatrix}\mathbf{x} \\ \mathbf{y}\end{pmatrix}
\sim
\mathcal{N}\left(
\begin{pmatrix} \mathbf{a} \\ \mathbf{b} \end{pmatrix},
\begin{pmatrix} \mathbf{A} & \mathbf{C} \\ \mathbf{C}^\top & \mathbf{B} \end{pmatrix}
\right),
\end{align}
then (Rasmussen & Williams, 2006, Chapter A.2)
\begin{align}
\mathbf{x} \mid \mathbf{y} \sim \mathcal{N}\left(\mathbf{a} + \mathbf{CB}^{-1}(\mathbf{y} - \mathbf{b}), \mathbf{A} - \mathbf{CB}^{-1}\mathbf{C}^\top \right).
\end{align}
In your case,
\begin{align}
\mathbf{x} &= x, \\
\mathbf{y} &= \begin{pmatrix} y \\ z \end{pmatrix}, \\
\mathbf{a} &= \mu_x, \\
\mathbf{b} &= \begin{pmatrix} \mu_y \\ \mu_z\end{pmatrix}, \\
\mathbf{A} &= \sigma_{xx}^2, \\
\mathbf{B} &= \begin{pmatrix} \sigma_{yy}^2 & \sigma_{yz}^2 \\ \sigma_{zy}^2 & \sigma_{zz}^2 \end{pmatrix},\\
\mathbf{C} &= \begin{pmatrix} \sigma_{xy}^2 & \sigma_{xz}^2 \end{pmatrix},
\end{align}
where $\sigma_{xz}^2$ is the covariance between $x$ and $z$. Hence,
$$E[x \mid y, z] = \mu_x + \begin{pmatrix} \sigma_{xy}^2 & \sigma_{xz}^2 \end{pmatrix}
\begin{pmatrix} \sigma_{yy}^2 & \sigma_{yz}^2 \\ \sigma_{zy}^2 & \sigma_{zz}^2 \end{pmatrix}^{-1} \begin{pmatrix} y - \mu_y \\ z - \mu_z \end{pmatrix}.$$
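For concreteness, a small R sketch of this formula (all numbers below are made-up example values, not from the question):
mu <- c(0, 1, -1)                     # means of (x, y, z)
S  <- matrix(c(2.0, 0.8, 0.5,
               0.8, 1.5, 0.3,
               0.5, 0.3, 1.0), 3, 3)  # full covariance matrix of (x, y, z)
obs <- c(1.4, -0.2)                   # observed values of (y, z)
mu[1] + S[1, 2:3] %*% solve(S[2:3, 2:3]) %*% (obs - mu[2:3])   # E[x | y, z]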
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/68329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to calculate $E[X^2]$ for a die roll? Apparently:
$$
E[X^2] = 1^2 \cdot \frac{1}{6} + 2^2 \cdot \frac{1}{6} + 3^2\cdot\frac{1}{6}+4^2\cdot\frac{1}{6}+5^2\cdot\frac{1}{6}+6^2\cdot\frac{1}{6}
$$
where $X$ is the result of a die roll.
How come this expansion?
| There are various ways to justify it.
For example, it follows from the definition of expectation and the law of the unconscious statistician.
Or define $Y=X^2$ and compute $E(Y)$ directly from the distribution of $Y$.
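For instance, applying the law of the unconscious statistician directly, and checking by simulation (R sketch):
sum((1:6)^2 * (1/6))                       # E[X^2] = 91/6 = 15.1667
mean(sample(1:6, 1e6, replace = TRUE)^2)   # approximately the same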
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/132996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Finding the Probability of a random variable with countably infinite values So I was working on a problem where I am provided with a PMF $p_X(k)= c/3^k$ for $k=1,2,3....$
I was able to calculate $c$ using the basic property of PMF and it came to be 2. I am not able to solve the next part which states that "Find $P(X\ge k)$ for all $k=1,2,3......$.
Any suggestions?
P.S :Here is the actual question:
Let X be a discrete random variable with probability mass function $p_X(k) = c/3^k$
for k = 1, 2, ... for some
$c > 0$. Find $c$. Find $P(X\ge k)$ for all $k = 1, 2,3....$
| Consider, $$S=\frac{2}{3}+\cdots+\frac{2}{3^{k-2}}+\frac{2}{3^{k-1}}$$
multiply $S$ by $\frac{1}{3}.$ Thus,
$$\frac{1}{3}S=\frac{2}{3^{2}}+\cdots+\frac{2}{3^{k-1}}+\frac{2}{3^{k}}.$$
Subtract $\frac{1}{3}S$ from $S$:
$$\frac{2}{3}S=\frac{2}{3}-\frac{2}{3^{k}}.$$ Thus,
$$S=1-\frac{1}{3^{k-1}}.$$ Now,
if $k=1, P(X\geq k)=1,$ and if $k>1,$
\begin{eqnarray}
P(X\geq k)&=&1-P(X< k)\\
&=&1-P(X\leq k-1)\\
&=&1-\displaystyle \sum_{n=1}^{k-1}\frac{2}{3^{n}}\\
&=&1-(1-\frac{1}{3^{k-1}})\\
&=&\frac{1}{3^{k-1}}
\end{eqnarray}
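A quick numerical check in R (sketch; $k=4$ is arbitrary and the infinite tail is truncated at 200 terms):
k <- 4
sum(2 / 3^(k:200))   # 0.037037 = 1/3^(k-1) = 1/27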
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/262359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limiting Sum of i.i.d. Gamma variates Let $X_1,X_2,\ldots$ be a sequence of independently and identically distributed random variables with the probability density function; $$ f(x) = \left\{ \begin{array}{ll}
\frac{1}{2}x^2 e^{-x} & \mbox{if $x>0$};\\
0 & \mbox{otherwise}.\end{array} \right. $$
Show that $$\lim_{n\to \infty} P[X_1+X_2+\ldots+X_n\ge 3(n-\sqrt{n})] \ge \frac{1}{2}$$
What I attempted
At first sight I thought I should use Chebyshev's inequality, since the question asks to show a lower bound involving $X_1+X_2+\ldots +X_n$. However, the limit sign clearly indicates that the problem is related to the Central Limit Theorem (CLT).
Let $S_n=X_1+X_2+\ldots +X_n$
$$E(S_n)=\sum_{i=1}^{n} E(X_i)=3n \ (\text{since } E(X_i)=3) \\ V(S_n)=\sum_{i=1}^{n} V(X_i)=3n \ (\text{since } V(X_i)=3 \text{ and } X_i \text{ are i.i.d.})$$
Now,
Using CLT, for large $n$, $X_1+X_2+........+X_n \sim N(3n,3n)$
Or, $$z=\frac{S_n-3n}{\sqrt{3n}} \sim N(0,1) \text{ as } n\to \infty$$
Now, $$\lim_{n\to \infty} P[X_1+X_2+\cdots+X_n\ge 3(n-\sqrt{n})]
= \lim_{n\to \infty}P(S_n-3n \ge -3\sqrt{n})
= \lim_{n\to \infty} P\left(\frac{S_n-3n}{\sqrt{3n}} \ge -\sqrt{3}\right)
=P(z\ge -\sqrt{3})
=P\left(-\sqrt{3}\le z<0\right)+P(z\ge 0 )
=P\left(-\sqrt{3}\le z<0\right)+\frac{1}{2}\quad\cdots(1)$$
Since $P(-\sqrt{3}\le z<0) \ge 0$, it follows from $(1)$ that
$$\lim_{n\to \infty} P[X_1+X_2+\cdots+X_n\ge 3(n-\sqrt{n})]\ge \frac{1}{2}$$
Am I correct?
| As an alternative to whuber's excellent answer, I will try to derive the exact limit of the probability in question. One of the properties of the gamma distribution is that sums of independent gamma random variables with the same rate/scale parameter are also gamma random variables with shape equal to the sum of the shapes of those variables. (That can be proved using the generating functions of the distribution.) In the present case we have $X_1,...X_n \sim \text{IID Gamma}(3,1)$, so we obtain the sum:
$$S_n \equiv X_1 + \cdots + X_n \sim \text{Gamma}(3n, 1).$$
We can therefore write the exact probability of interest using the CDF of the gamma distribution. Letting $a = 3n$ denote the shape parameter and $x = 3(n-\sqrt{n})$ denote the argument of interest, we have:
$$\begin{equation} \begin{aligned}
H(n)
&\equiv \mathbb{P}(S_n \geq 3(n-\sqrt{n})) \\[12pt]
&= \frac{\Gamma(a, x)}{\Gamma(a)} \\[6pt]
&= \frac{a \Gamma(a)}{a \Gamma(a) + x^a e^{-x}} \cdot \frac{\Gamma(a+1, x)}{\Gamma(a+1)}. \\[6pt]
\end{aligned} \end{equation}$$
To find the limit of this probability, we first note that we can write the second parameter in terms of the first as $x = a + \sqrt{2a} \cdot y$ where $y = -\sqrt{3/2}$. Using a result shown in Temme (1975) (Eqn 1.4, p. 1109) we have the asymptotic equivalence:
$$\begin{aligned}
\frac{\Gamma(a+1, x)}{\Gamma(a+1)}
&\sim \frac{1}{2} + \frac{1}{2} \cdot \text{erf}(-y) + \sqrt{\frac{2}{9a \pi}} (1+y^2) \exp( - y^2).
\end{aligned}$$
Using Stirling's approximation, and the limiting definition of the exponential number, it can also be shown that:
$$\begin{aligned}
\frac{a \Gamma(a)}{a \Gamma(a) + x^a e^{-x}}
&\sim \frac{\sqrt{2 \pi} \cdot a \cdot (a-1)^{a-1/2}}{\sqrt{2 \pi} \cdot a \cdot (a-1)^{a-1/2} + x^a \cdot e^{a-x-1}} \\[6pt]
&= \frac{\sqrt{2 \pi} \cdot a \cdot (1-\tfrac{1}{a})^{a-1/2}}{\sqrt{2 \pi} \cdot a \cdot (1-\tfrac{1}{a})^{a-1/2} + \sqrt{x} \cdot (\tfrac{x}{a})^{a-1/2} \cdot e^{a-x-1}} \\[6pt]
&= \frac{\sqrt{2 \pi} \cdot a \cdot e^{-1}}{\sqrt{2 \pi} \cdot a \cdot e^{-1} + \sqrt{x} \cdot e^{x-a} \cdot e^{a-x-1}} \\[6pt]
&= \frac{\sqrt{2 \pi} \cdot a}{\sqrt{2 \pi} \cdot a + \sqrt{x}} \\[6pt]
&\sim \frac{\sqrt{2 \pi a}}{\sqrt{2 \pi a} + 1}. \\[6pt]
\end{aligned}$$
Substituting the relevant values, we therefore obtain:
$$\begin{equation} \begin{aligned}
H(n)
&= \frac{a \Gamma(a)}{a \Gamma(a) + x^a e^{-x}} \cdot \frac{\Gamma(a+1, x)}{\Gamma(a+1)} \\[6pt]
&\sim \frac{\sqrt{2 \pi a}}{\sqrt{2 \pi a} + 1} \cdot \Bigg[ \frac{1}{2} + \frac{1}{2} \cdot \text{erf} \Big( \sqrt{\frac{3}{2}} \Big) + \sqrt{\frac{2}{9a \pi}} \cdot \frac{5}{2} \cdot \exp \Big( -\frac{3}{2} \Big) \Bigg]. \\[6pt]
\end{aligned} \end{equation}$$
This gives us the limit:
$$\lim_{n \rightarrow \infty} H(n) = \frac{1}{2} + \frac{1}{2} \cdot \text{erf} \Big( \sqrt{\frac{3}{2}} \Big) = 0.9583677.$$
This gives us the exact limit of the probability of interest, which is larger than one-half.
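Since the exact probability is available from the gamma CDF, the convergence is easy to check numerically (R sketch):
n <- c(10, 100, 1000, 10000)
pgamma(3 * (n - sqrt(n)), shape = 3 * n, rate = 1, lower.tail = FALSE)   # exact probabilities
pnorm(sqrt(3))                                                           # limiting value 0.9583677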
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/342704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 2,
"answer_id": 1
} |
Integrating out parameter with improper prior I got this problem while I was reading the book "Machine Learning: A Probabilistic Perspective" by Kevin Murphy. It is in section 7.6.1 of the book.
Assume the likelihood is given by
$$
\begin{split}
p(\mathbf{y}|\mathbf{X},\mathbf{w},\mu,\sigma^2) & = \mathcal{N}(\mathbf{y}|\mu+\mathbf{X}\mathbf{w}, \sigma^2\mathbf{I}_N) \\
& \propto \exp\left(-\frac{1}{2\sigma^2}(\mathbf{y}-\mu\mathbf{1}_N - \mathbf{X}\mathbf{w})^T(\mathbf{y}-\mu\mathbf{1}_N - \mathbf{X}\mathbf{w})\right)
\end{split}
\tag{7.53}
$$
$\mu$ and $\sigma^2$ are scalars. $\mu$ serves as an offset. $\mathbf{1}_N$ is a column vector with length $N$.
We put an improper prior on $\mu$ of the form $p(\mu) \propto 1$ and then integrate it out to get
$$
p(\mathbf{y}|\mathbf{X},\mathbf{w},\sigma^2) \propto \exp\left(-\frac{1}{2\sigma^2}||\mathbf{y}-\bar{y}\mathbf{1}_N - \mathbf{X}\mathbf{w}||_2^2\right)
\tag{7.54}
$$
where $\bar{y}=\frac{1}{N}\sum_{i=1}^{N}y_i$ is the empirical mean of the output.
I tried to expand the formula (last line in $7.53$) to integrate directly but failed.
Any idea or hint on how to derive from $(7.53)$ to $(7.54)$?
| This calculation assumes that the columns of the design matrix have been centred, so that:
$$(\mathbf{Xw}) \cdot \mathbf{1}_N = \mathbf{w}^\text{T} \mathbf{X}^\text{T} \mathbf{1}_N = \mathbf{w}^\text{T} \mathbf{0} = 0.$$
With this restriction you can rewrite the quadratic form as a quadratic in $\mu$ plus a term that does not depend on $\mu$ as follows:
$$\begin{equation} \begin{aligned}
|| \mathbf{y} - \mu \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2
&= || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} + (\bar{y} - \mu) \mathbf{1}_N ||^2 \\[6pt]
&= || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2
+ 2 (\bar{y} - \mu) (\mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w}) \cdot \mathbf{1}_N
+ (\bar{y} - \mu)^2 || \mathbf{1}_N ||^2 \\[6pt]
&= || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2
+ 0 + n (\bar{y} - \mu)^2 \\[6pt]
&= || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2
+ n (\mu - \bar{y})^2, \\[6pt]
\end{aligned} \end{equation}$$
since the cross term vanishes: $(\mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w}) \cdot \mathbf{1}_N = n \bar{y} - n \bar{y} - 0 = 0$, using the centring of $\mathbf{X}$.
Hence, with the improper prior $\pi(\mu) \propto 1$ you have:
$$\begin{equation} \begin{aligned}
p(\mathbf{y}|\mathbf{X},\mathbf{w},\sigma^2)
&= \int \limits_\mathbb{R} p(\mathbf{y}|\mathbf{X},\mathbf{w},\mu,\sigma^2) \pi(\mu) \ d \mu \\[6pt]
&\overset{\mathbf{y}}{\propto} \int \limits_\mathbb{R} \exp \Big( -\frac{1}{2\sigma^2} || \mathbf{y}-\mu\mathbf{1}_N - \mathbf{X}\mathbf{w} ||^2 \Big) \ d \mu \\[6pt]
&= \exp \Big( -\frac{1}{2\sigma^2} || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2 \Big) \int \limits_\mathbb{R} \exp \Big( -\frac{n}{2\sigma^2} (\mu - \bar{y})^2 \Big) \ d \mu \\[6pt]
&\overset{\mathbf{y}}{\propto} \exp \Big( -\frac{1}{2\sigma^2} || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2 \Big) \int \limits_\mathbb{R} \text{N} \Big( \mu \Big| \bar{y}, \frac{\sigma^2}{n} \Big) \ d \mu \\[6pt]
&= \exp \Big( -\frac{1}{2\sigma^2} || \mathbf{y} - \bar{y} \mathbf{1}_N - \mathbf{X} \mathbf{w} ||^2 \Big). \\[6pt]
\end{aligned} \end{equation}$$
Thus, after integrating out $\mu$, the sampling distribution has the kernel of
$$\mathbf{y}|\mathbf{X},\mathbf{w},\sigma^2 \sim \text{N}(\bar{y} \mathbf{1}_N + \mathbf{X} \mathbf{w},\ \sigma^2 \mathbf{I}_N),$$
which is exactly $(7.54)$.
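A numerical sketch of the marginalisation step in R (all values below are arbitrary; the columns of $\mathbf{X}$ are centred as assumed above):
set.seed(1)
N <- 5; sigma <- 1.2
X <- scale(matrix(rnorm(N * 2), N, 2), scale = FALSE)   # centred design matrix
w <- c(0.5, -1); y <- rnorm(N)
r <- as.vector(y - X %*% w)
lhs <- integrate(function(m) sapply(m, function(mi) exp(-sum((r - mi)^2) / (2 * sigma^2))),
                 -Inf, Inf)$value
rhs <- exp(-sum((r - mean(y))^2) / (2 * sigma^2))
lhs / rhs                       # equals sqrt(2 * pi * sigma^2 / N): constant in y and w
sqrt(2 * pi * sigma^2 / N)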
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/392584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cramer-Rao Lower Bound for the estimation of Pearson correlation Given a bivariate Gaussian distribution $\mathcal{N}\left(0,\begin{pmatrix}
1 & \rho \\
\rho & 1
\end{pmatrix}\right)$, I am looking for information on the distribution of $\hat{\rho}$ when estimating $\rho$ on finite sample with the Pearson estimator.
Is there any known Cramer-Rao lower bound for that?
| I did the computations on my own, but I find something different:
We consider the set of $2 \times 2$ correlation matrices
$C =
\begin{pmatrix}
1 & \theta \\
\theta & 1
\end{pmatrix}$ parameterized by $\theta$.
Let $x = \begin{pmatrix}
x_1 \\
x_2
\end{pmatrix} \in \mathbf{R}^2$.
$f(x;\theta) = \frac{1}{2\pi \sqrt{1-\theta^2}} \exp\left(-\frac{1}{2}x^\top C^{-1} x \right) =
\frac{1}{2\pi \sqrt{1-\theta^2}} \exp\left( -\frac{1}{2(1-\theta^2)}(x_1^2 + x_2^2 - 2\theta x_1 x_2) \right)$
$\log f(x;\theta) = - \log(2\pi \sqrt{1-\theta^2}) -\frac{1}{2(1-\theta^2)}(x_1^2 + x_2^2 - 2\theta x_1 x_2) $
$\frac{\partial^2 \log f(x;\theta)}{\partial \theta^2} =
\frac{\theta^2 + 1}{(\theta^2 - 1)^2}
- \frac{x_1^2}{2(\theta+1)^3}
+ \frac{x_1^2}{2(\theta-1)^3}
- \frac{x_2^2}{2(\theta+1)^3}
+ \frac{x_2^2}{2(\theta-1)^3}
- \frac{x_1 x_2}{(\theta+1)^3}
- \frac{x_1 x_2}{(\theta-1)^3}
$
Then, we compute $\int_{-\infty}^{\infty} \frac{\partial^2 \log f(x;\theta)}{\partial \theta^2} f(x;\theta) dx$.
Since $\mathbf{E}[x_1] = \mathbf{E}[x_2] = 0$, $\mathbf{E}[x_1x_2] = \theta$, $\mathbf{E}[x_1^2] = \mathbf{E}[x_2^2] = 1$, we get
$\int_{-\infty}^{\infty} \frac{\partial^2 \log f(x;\theta)}{\partial \theta^2} f(x;\theta) dx =
\frac{\theta^2 + 1}{(\theta^2 - 1)^2}
- \frac{1}{2(\theta+1)^3}
+ \frac{1}{2(\theta-1)^3}
- \frac{1}{2(\theta+1)^3}
+ \frac{1}{2(\theta-1)^3}
- \frac{\theta}{(\theta+1)^3}
- \frac{\theta}{(\theta-1)^3}
=
- \frac{\theta^2+1}{(\theta-1)^2(\theta+1)^2}
$
Thus, $$g(\theta) = \frac{\theta^2+1}{(\theta-1)^2(\theta+1)^2} = \frac{1+\theta^2}{(1-\theta^2)^2}.$$
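A Monte Carlo check in R (sketch; $\theta = 0.6$ is arbitrary, and the score $\frac{\partial \log f}{\partial \theta}=\frac{\theta}{1-\theta^{2}}-\frac{\theta(x_1^{2}+x_2^{2})-(1+\theta^{2})x_1x_2}{(1-\theta^{2})^{2}}$ used below is derived separately from the log-density above):
set.seed(1)
theta <- 0.6; n <- 1e6
x1 <- rnorm(n); x2 <- theta * x1 + sqrt(1 - theta^2) * rnorm(n)
score <- theta / (1 - theta^2) -
  (theta * (x1^2 + x2^2) - (1 + theta^2) * x1 * x2) / (1 - theta^2)^2
mean(score^2)                        # approx 3.3
(1 + theta^2) / (1 - theta^2)^2      # g(0.6) = 3.3203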
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/195542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$
Let $X_1$, $X_2$, $\cdots$, $X_d \sim \mathcal{N}(0, 1)$ and be independent. What is the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$?
It is easy to find $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right) = \frac{1}{d}$ by symmetry. But I do not know how to find the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$. Could you please provide some hints?
What I have obtained so far
I wanted to find $\mathbb{E}\left(\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ by symmetry. But this case is different from that for $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right)$ because $\mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ may be not equal to $\mathbb{E}\left(\frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right)$. So I need some other ideas to find the expectation.
Where this question comes from
A question in mathematics stack exchange asks for the variance of $\|Ax\|_2^2$ for a unit uniform random vector $x$ on $S^{d-1}$. My derivation shows that the answer depends solely on the values of $\mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ and $\mathbb{E}\left(\frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right)$ for $i \neq j$. Since
$$
\sum_{i \neq j}\mathbb{E} \left( \frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right) + \sum_i \mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right) = 1
$$
and by symmetry, we only need to know the value of $\mathbb{E}\left(\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ to obtain other expectations.
| The distribution of $X_i^2$ is chi-square (and also a special case of gamma).
The distribution of $\frac{X_1^2}{X_1^2 + \cdots + X_d^2}$ is thereby beta.
The expectation of the square of a beta isn't difficult.
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/222915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
} |
$(2Y-1)\sqrt X\sim\mathcal N(0,1)$ when $X\sim\chi^2_{n-1}$ and $Y\sim\text{Beta}\left(\frac{n}{2}-1,\frac{n}{2}-1\right)$ independently
$X$ and $Y$ are independently distributed random variables where $X\sim\chi^2_{(n-1)}$ and $Y\sim\text{Beta}\left(\frac{n}{2}-1,\frac{n}{2}-1\right)$. What is the distribution of $Z=(2Y-1)\sqrt X$ ?
Joint density of $(X,Y)$ is given by
$$f_{X,Y}(x,y)=f_X(x)f_Y(y)=\frac{e^{-\frac{x}{2}}x^{\frac{n-1}{2}-1}}{2^{\frac{n-1}{2}}\Gamma\left(\frac{n-1}{2}\right)}\cdot\frac{y^{\frac{n}{2}-2}(1-y)^{\frac{n}{2}-2}}{B\left(\frac{n}{2}-1,\frac{n}{2}-1\right)}\mathbf1_{\{x>0\,,\,0<y<1\}}$$
Using the change of variables $(X,Y)\mapsto(Z,W)$ such that $Z=(2Y-1)\sqrt X$ and $W=\sqrt X$,
I get the joint density of $(Z,W)$ as
$$f_{Z,W}(z,w)=\frac{e^{-\frac{w^2}{2}}w^{n-3}\left(\frac{1}{4}-\frac{z^2}{4w^2}\right)^{\frac{n}{2}-2}}{2^{\frac{n-1}{2}}\Gamma\left(\frac{n-1}{2}\right)B\left(\frac{n}{2}-1,\frac{n}{2}-1\right)}\mathbf1_{\{w>0\,,\,|z|<w\}}$$
Marginal pdf of $Z$ is then
$f_Z(z)=\displaystyle\int_{|z|}^\infty f_{Z,W}(z,w)\,\mathrm{d}w$, which does not lead me anywhere.
Again, while finding the distribution function of $Z$, an incomplete beta/gamma function shows up:
$F_Z(z)=\Pr(Z\le z)$
$\quad\qquad=\Pr((2Y-1)\sqrt X\le z)=\displaystyle\iint_{(2y-1)\sqrt{x}\le z}f_{X,Y}(x,y)\,\mathrm{d}x\,\mathrm{d}y$
What is an appropriate change of variables here? Is there another way to find the distribution of $Z$?
I tried using different relations between Chi-Squared, Beta, 'F' and 't' distributions but nothing seems to work. Perhaps I am missing something obvious.
As mentioned by @Francis, this transformation is a generalization of the Box-Müller transform.
| As user @Chaconne has already done, I was able to provide an algebraic proof with this particular transformation. I have not skipped any details.
(We already have $n>2$ for the density of $Y$ to be valid).
Let us consider the transformation $(X,Y)\mapsto (U,V)$ such that $U=(2Y-1)\sqrt{X}$ and $V=X$.
This implies $x=v$ and $y=\frac{1}{2}\left(\frac{u}{\sqrt{v}}+1\right)$.
Now, $x>0\implies v>0$ and $0<y<1\implies-\sqrt{v}<u<\sqrt{v}$,
so that the bivariate support of $(U,V)$ is simply $S=\{(u,v):0<u^2<v<\infty,\,u\in\mathbb{R}\}$.
Absolute value of the Jacobian of transformation is $|J|=\frac{1}{2\sqrt{v}}$.
Joint density of $(U,V)$ is thus
$$f_{U,V}(u,v)=\frac{e^{-\frac{v}{2}}v^{\frac{n-1}{2}-1}\left(\frac{u}{\sqrt{v}}+1\right)^{\frac{n}{2}-2}\left(\frac{1}{2}-\frac{u}{2\sqrt{v}}\right)^{\frac{n}{2}-2}\Gamma(n-2)}{(2\sqrt{v})\,2^{\frac{n-1}{2}+\frac{n}{2}-2}\,\Gamma\left(\frac{n-1}{2}\right)\left(\Gamma\left(\frac{n}{2}-1\right)\right)^2}\mathbf1_{S}$$
$$=\frac{e^{-\frac{v}{2}}v^{\frac{n-4}{2}}(\sqrt{v}+u)^{\frac{n}{2}-2}(\sqrt{v}-u)^{\frac{n}{2}-2}\,\Gamma(n-2)}{2^{\frac{2n-3}{2}+\frac{n}{2}-2}\,(\sqrt{v})^{n-4}\,\Gamma\left(\frac{n-1}{2}\right)\left(\Gamma\left(\frac{n-2}{2}\right)\right)^2}\mathbf1_{S}$$
Now, using Legendre's duplication formula,
$\Gamma(n-2)=\frac{2^{n-3}}{\sqrt \pi}\Gamma\left(\frac{n-2}{2}\right)\Gamma\left(\frac{n-2}{2}+\frac{1}{2}\right)=\frac{2^{n-3}}{\sqrt \pi}\Gamma\left(\frac{n-2}{2}\right)\Gamma\left(\frac{n-1}{2}\right)$ where $n>2$.
So for $n>2$, $$f_{U,V}(u,v)=\frac{2^{n-3}\,e^{-\frac{v}{2}}(v-u^2)^{\frac{n}{2}-2}}{\sqrt \pi\,2^{\frac{3n-7}{2}}\,\Gamma\left(\frac{n}{2}-1\right)}\mathbf1_{S}$$
Marginal pdf of $U$ is then given by
$$f_U(u)=\frac{1}{2^{\frac{n-1}{2}}\sqrt \pi\,\Gamma\left(\frac{n}{2}-1\right)}\int_{u^2}^\infty e^{-\frac{v}{2}}(v-u^2)^{\frac{n}{2}-2}\,\mathrm{d}v$$
$$=\frac{e^{-\frac{u^2}{2}}}{2^{\frac{n-1}{2}}\sqrt \pi\,\Gamma\left(\frac{n}{2}-1\right)}\int_0^\infty e^{-\frac{t}{2}}\,t^{(\frac{n}{2}-1-1)}\,\mathrm{d}t$$
$$=\frac{1}{2^{\frac{n-1}{2}}\sqrt \pi\,\left(\frac{1}{2}\right)^{\frac{n}{2}-1}}e^{-\frac{u^2}{2}}$$
$$=\frac{1}{\sqrt{2\pi}}e^{-u^2/2}\,,u\in\mathbb{R}$$
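A simulation check in R (sketch; $n = 7$ is arbitrary):
set.seed(1)
n <- 7
z <- (2 * rbeta(1e6, n/2 - 1, n/2 - 1) - 1) * sqrt(rchisq(1e6, df = n - 1))
c(mean(z), var(z))                    # approx 0 and 1
ks.test(z[1:5000], "pnorm")$p.value   # typically large, consistent with N(0, 1)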
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/327499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 1
} |
Testing whether $X\sim\mathsf N(0,1)$ against the alternative that $f(x) =\frac{2}{\Gamma(1/4)}\text{exp}(−x^4)\text{ }I_{(-\infty,\infty)}(x)$
Consider the most powerful test of the null hypothesis that $X$ is a
standard normal random variable against the alternative that $X$ is a
random variable having pdf
$$f(x) =\frac{2}{\Gamma(1/4)}\text{exp}(−x^4)\text{
}I_{(-\infty,\infty)}(x)$$
and give the p-value if the observed value of $X$ is $0.6$
My try:
I think I should be using a likelihood ratio test.
I read that the Neyman–Pearson lemma states that the likelihood ratio test is the most powerful among all level $\alpha$ tests.
We have that the likelihood ratio is
$$\frac{f_0(x)}{f_1(x)}=\frac{\frac{1}{\sqrt{2\pi}}\text{exp}(-x^2/2)}{\frac{2}{\Gamma(1/4)}\text{exp}(-x^4)}=\frac{\Gamma(1/4)}{\sqrt{8\pi}}\text{exp}\left(\frac{-x^2}{2}+x^4\right)$$
Thus we accept $H_0$ if
$$\frac{\Gamma(1/4)}{\sqrt{8\pi}}\text{exp}\left(\frac{-x^2}{2}+x^4\right)\geq c$$
or equivalently if
$$\frac{-x^2}{2}+x^4 \geq \text{log}\left(\frac{\sqrt{8\pi}\cdot c}{\Gamma(1/4)}\right)$$
or equivalently if one of the following holds:
$$x^2\geq \frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}$$
$$x^2\geq \frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}$$
or equivalently if one of the following holds:
$$x\geq \sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$
$$x\leq -\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$
$$x\geq \sqrt{\frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$
$$x\leq -\sqrt{\frac{1-\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}$$
For a meaningful acceptance region we only consider the top two of the four constraints. Hence we reject if $$x\in\left(-\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}},\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}\right)$$
We wish, under the null, that the probability that $X$ assumes a value in this range to be $0.05$
For this to occur, we need
$$\sqrt{\frac{1+\sqrt{1+16\cdot\text{log}(1.382746\cdot c)}}{4}}=0.06270678$$
But software gives that there are no solutions for $c\in\mathbb{R}$. Any suggestions or confirmation of my approach would be much appreciated.
| The test will reject $H_0$ for sufficiently large values of the ratio
$$\begin{align*}
\frac{2}{\Gamma\left(\frac{1}{4}\right)}\frac{\text{exp}\left(-x^4\right)}{\frac{1}{\sqrt{2\pi}}\text{exp}\left(-\frac{x^2}{2}\right)}
&=\frac{2\sqrt{2\pi}}{\Gamma\left(\frac{1}{4}\right)}\text{exp}\left(-x^4+\frac{1}{2}x^2\right)\\\\
&=\frac{2\sqrt{2\pi}}{\Gamma\left(\frac{1}{4}\right)}e^{\frac{1}{16}}\text{exp}\left(-x^4+\frac{1}{2}x^2-\frac{1}{16}\right)\\\\
&=c\cdot\text{exp}\left(-\left[x^2-\frac{1}{4}\right]^2\right)
\end{align*}$$
where $c$ is a positive constant. For the observed value of $x$, this ratio equals
$$c\cdot\text{exp}\left(-\left[0.6^2-\frac{1}{4}\right]^2\right)=c\cdot\text{exp}(-0.0121)$$
The desired p-value is the probability, when $H_0$ is true, that $X$ assumes any value $x$ such that $$c\cdot\text{exp}\left(-\left[x^2-\frac{1}{4}\right]^2\right)\geq c\cdot\text{exp}(-0.0121)$$
Such values of $x$ satisfy
$$\left[x^2-\frac{1}{4}\right]^2\leq 0.0121\Rightarrow\left|x^2-\frac{1}{4}\right|\leq0.11$$
Hence $$-0.11\leq x^2-\frac{1}{4}\leq0.11\Rightarrow 0.14\leq x^2 \leq 0.36$$
Then $$\sqrt{0.14}\leq x \leq 0.6 \text{ or } -0.6\leq x \leq -\sqrt{0.14}$$
Because the standard normal pdf is symmetric about $0$, the desired p-value is
$$2\left[\Phi(0.6)-\Phi(\sqrt{0.14})\right]=0.16$$
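In R (sketch):
2 * (pnorm(0.6) - pnorm(sqrt(0.14)))   # p-value, approx 0.16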
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/379808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Probability of Weibull RV conditional on another Weibull RV Let $X, Y$ be independent Weibull distributed random variables and $x>0$ a constant. Is there a closed form solution to calculating the probability $$P(X<x|X<Y)?$$
Or maybe a way to approximate this probability?
| Letting $a_1,b_1$ and $a_2,b_2$ denote the parameters of $X$ and $Y$, and assuming that $b_1=b_2=b$,
\begin{align}
P(X<x \cap X<Y)
&=\int_0^x \int_x^\infty f_X(x)f_Y(y) dy\,dx
\\&=\int_0^x f_X(x)(1-F_Y(x))dx
\\&=\int_0^x a_1 b x^{b-1}e^{-a_1 x^b - a_2 x^b} dx
\\&=\frac{a_1}{a_1+a_2}\int_0^{(a_1+a_2)x^b}e^{-u}du
\\&=\frac{a_1}{a_1+a_2}(1-e^{-(a_1+a_2)x^b}).
\end{align}
Similarly,
$$
P(X<Y)=\int_0^\infty \int_x^\infty f_X(x)f_Y(y) dy\,dx=\frac{a_1}{a_1+a_2}
$$
and so
$$
P(X<x|X<Y)=1-e^{-(a_1+a_2)x^b}.
$$
If instead $b_1=b$ and $b_2=2b$,
\begin{align}
P(X<x \cap X<Y)
&=\int_0^x a_1 b x^{b-1}e^{-a_1 x^b - a_2 x^{2b}} dx
\\&=\frac{a_1}{\sqrt{a_2}}\int_0^{\sqrt{a_2}x^b}e^{-(u^2+\frac{a_1}{\sqrt{a_2}}u)}du
\\&=\frac{a_1}{\sqrt{a_2}}e^{\frac{a_1^2}{4a_2}}\int_0^{\sqrt{a_2}x^b}e^{-(u+\frac{a_1}{2\sqrt{a_2}})^2}du
\\&=\frac{a_1}{2\sqrt{a_2}}e^{\frac{a_1^2}{4a_2}}\int_{\frac{a_1^2}{4a_2}}^{(\sqrt{a_2}x^b+\frac{a_1}{2\sqrt{a_2}})^2}v^{-\frac12}e^{-v}dv
\\&=\frac{a_1}{2\sqrt{a_2}}e^{\frac{a_1^2}{4a_2}}\left(\Gamma\left(\frac12,\frac{a_1^2}{4a_2}\right)-\Gamma\left(\frac12,\left(\sqrt{a_2}x^b+\frac{a_1}{2\sqrt{a_2}}\right)^2\right)\right),
\end{align}
where $\Gamma$ is the (upper) incomplete Gamma function. A similar calculation for $P(X<Y)$ leads to
$$
P(X<x|X<Y)=1-\frac{\Gamma\left(\frac12,\left(\sqrt{a_2}x^b+\frac{a_1}{2\sqrt{a_2}}\right)^2\right)}{\Gamma\left(\frac12,\frac{a_1^2}{4a_2}\right)}.
$$
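A Monte Carlo check of the equal-shape case in R (sketch; the parameter values are arbitrary, and R's rweibull with scale $a^{-1/b}$ gives survival $e^{-a x^{b}}$):
set.seed(1)
a1 <- 2; a2 <- 3; b <- 1.5; x0 <- 0.4
X <- rweibull(1e6, shape = b, scale = a1^(-1/b))
Y <- rweibull(1e6, shape = b, scale = a2^(-1/b))
mean(X[X < Y] < x0)          # approx 0.72
1 - exp(-(a1 + a2) * x0^b)   # closed form, 0.7178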
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/513466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Calculating $\operatorname{var} \left(\frac{X_1-\bar{X}}{S}\right)$ Suppose $X_1,X_2,\ldots, X_n$ are random variables distributed independently as $N(\theta , \sigma^2)$. define $$S^2=\frac{1}{n-1}\sum_{i=1}^{n} (X_i-\bar{X})^2 ,\qquad \bar{X}=\frac{1}{n}\sum_{i=1}^{n} X_i\,.$$
Take $n=10$. How can $\operatorname{var} \left(\dfrac{X_1-\bar{X}}{S}\right)$ be calculated?
| I think it is possible to arrive at an integral representation of $\text{Var}[\frac{X_1-\bar{X}}{S}]$. First, let us express the sample mean $\bar{X}$ and the sample variance $S^2$ in terms of their counterparts for the observations other than $X_1$:
\begin{equation*}
\bar{X}_* = \frac{1}{n-1}(X_2+\ldots+ X_n) \quad\text{ and }\quad S_*^2 = \frac{1}{n-2} \sum_{i=2}^n (X_i-\bar{X}_*)^2
\end{equation*}
It is not so difficult to prove that (see also here)
\begin{equation*}
\bar{X} = \frac{1}{n} X_1 + \frac{n-1}{n} \bar{X}_* \quad\text{ and }\quad S^2 = \frac{n-2}{n-1}S_*^2 + \frac{1}{n}(X_1-\bar{X}_*)^2
\end{equation*}
By symmetry (the terms $(X_i-\bar{X})/S$ are identically distributed and sum to zero), $E[\frac{X_1-\bar{X}}{S}]=0$, and therefore $\text{Var}[\frac{X_1-\bar{X}}{S}] = E[\frac{(X_1-\bar{X})^2}{S^2}]$. The quantity of which we need the expectation can be rewritten as
\begin{align*}
\frac{(X_1-\bar{X})^2}{S^2} & = \frac{(X_1 - \frac{1}{n}X_1 - \frac{n-1}{n} \bar{X}_*)^2}{\frac{n-2}{n-1}S^2_* + \frac{1}{n}(X_1-\bar{X}_*)^2}\\
& = \big(\frac{n-1}{n}\big)^2 \frac{(X_1-\bar{X}_*)^2}{\frac{n-2}{n-1}S^2_* + \frac{1}{n}(X_1-\bar{X}_*)^2}
\end{align*}
The important thing now is that $X_1\sim N(\mu,\sigma^2)$, $\bar{X}_*\sim N(\mu,\frac{1}{n-1}\sigma^2)$ and $\frac{n-2}{\sigma^2}S^2_* \sim \chi^2_{n-2}$ are jointly independent. Define $Y=X_1-\bar{X}_*$, which is $N(0,\frac{n}{n-1}\sigma^2)$ and therefore $\frac{n-1}{n\sigma^2} Y^2 \sim \chi^2_1$. Then
\begin{align*}
E[\frac{(X_1-\bar{X})^2}{S^2}] & = \big(\frac{n-1}{n}\big)^2 E[\frac{Y^2}{\frac{n-2}{n-1}S^2_* + \frac{1}{n}Y^2}] = \frac{(n-1)^2}{n} E[\frac{\chi_1^2}{\chi_{n-2}^2 + \chi_1^2}]\\
\end{align*}
with $\chi_1^2$ and $\chi_{n-2}^2$ still independent. Expanding the expectation operator and using the density $f_{\chi^2_m}(x)=(\frac{x}{2})^{\frac{m}{2}-1}\frac{1}{2\Gamma(m/2)}e^{-\frac{x}{2}}$ of the $\chi^2_m$-distribution we may numerically evaluate
\begin{align*}
E[\frac{(X_1-\bar{X})^2}{S^2}] & = \frac{(n-1)^2}{n} \int_0^\infty \int_0^\infty \frac{a}{b+a} f_{\chi^2_1}(a) f_{\chi^2_{n-2}}(b) \text{d}a\text{d}b
\end{align*}
Unfortunately, I see no easy way to do this.
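A simulation in R (sketch; $n=10$ as in the question) suggests the expectation evaluates to $(n-1)/n$:
set.seed(1)
n <- 10
sims <- replicate(2e5, {x <- rnorm(n); (x[1] - mean(x)) / sd(x)})
var(sims)     # approx 0.9
(n - 1) / n   # 0.9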
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/168306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$
Let $X_1$, $X_2$, $\cdots$, $X_d \sim \mathcal{N}(0, 1)$ and be independent. What is the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$?
It is easy to find $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right) = \frac{1}{d}$ by symmetry. But I do not know how to find the expectation of $\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}$. Could you please provide some hints?
What I have obtained so far
I wanted to find $\mathbb{E}\left(\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ by symmetry. But this case is different from that for $\mathbb{E}\left(\frac{X_1^2}{X_1^2 + \cdots + X_d^2}\right)$ because $\mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ may be not equal to $\mathbb{E}\left(\frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right)$. So I need some other ideas to find the expectation.
Where this question comes from
A question in mathematics stack exchange asks for the variance of $\|Ax\|_2^2$ for a unit uniform random vector $x$ on $S^{d-1}$. My derivation shows that the answer depends solely on the values of $\mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ and $\mathbb{E}\left(\frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right)$ for $i \neq j$. Since
$$
\sum_{i \neq j}\mathbb{E} \left( \frac{X_i^2X_j^2}{(X_1^2 + \cdots + X_d^2)^2}\right) + \sum_i \mathbb{E}\left(\frac{X_i^4}{(X_1^2 + \cdots + X_d^2)^2}\right) = 1
$$
and by symmetry, we only need to know the value of $\mathbb{E}\left(\frac{X_1^4}{(X_1^2 + \cdots + X_d^2)^2}\right)$ to obtain other expectations.
| This answer expands @Glen_b's answer.
Fact 1: If $X_1$, $X_2$, $\cdots$, $X_n$ are independent standard normal distribution random variables, then the sum of their squares has the chi-squared distribution with $n$ degrees of freedom. In other words,
$$
X_1^2 + \cdots + X_n^2 \sim \chi^2(n)
$$
Therefore, $X_1^2 \sim \chi^2(1)$ and $X_2^2 + \cdots + X_d^2 \sim \chi^2(d-1)$.
Fact 2: If $X \sim \chi^2(\lambda_1)$ and $Y \sim \chi^2(\lambda_2)$, then
$$
\frac{X}{X + Y} \sim \texttt{beta}(\frac{\lambda_1}{2}, \frac{\lambda_2}{2})
$$
Therefore, $Y = \frac{X_1^2}{X_1^2 + \cdots + X_d^2} \sim \texttt{beta}(\frac{1}{2}, \frac{d-1}{2})$.
Fact 3: If $X \sim \texttt{beta}(\alpha, \beta)$, then
$$
\mathbb{E}(X) = \frac{\alpha}{\alpha + \beta}
$$
and
$$
\mathbb{Var}(X) = \frac{\alpha\beta}{(\alpha + \beta)^2(\alpha + \beta + 1)}
$$
Therefore,
$$\mathbb{E}(Y) = \frac{1}{d}$$
and
$$
\mathbb{Var}(Y) = \frac{2(d-1)}{d^2(d+2)}
$$
Finally,
$$
\mathbb{E}(Y^2) = \mathbb{Var}(Y) + \mathbb{E}(Y)^2 = \frac{3d}{d^2(d+2)} = \frac{3}{d(d+2)}.
$$
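A quick Monte Carlo confirmation in R (sketch; $d = 5$ is arbitrary):
set.seed(1)
d <- 5
X <- matrix(rnorm(1e6 * d), ncol = d)
mean(X[, 1]^4 / rowSums(X^2)^2)   # approx 0.0857
3 / (d * (d + 2))                 # 3/(d(d+2)) = 0.0857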
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/222915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
how to find expected value? The random variables X and Y have joint probability function $p(x,y)$ for $x = 0,1$ and
$y = 0,1,2$.
Suppose $3p(1,1) = p(1,2)$, and $p(1,1)$ maximizes the variance of $XY$.
Calculate the probability that $X$ or $Y$ is $0$.
Solution: Let $Z = XY$. Let $a, b$, and $c$ be the probabilities that $Z$ takes on the values $0, 1$, and $2$, respectively.
We have $b = p(1,1)$ and $c = p(1,2)$ and thus $3b = c$. And because the probabilities sum to $1, a = 1 – b – c = 1 – 4b$.
Then, $$E(Z) = b + 2c = 7b,\qquad E(Z^2) = b + 4c = 13b.$$ Then, $$Var (Z) = 13b-49b^2,\qquad \frac{dVar(Z)}{db}=13-98b=0 \implies b =\frac{13}{98}.$$ The probability that either $X$ or $Y$ is zero is the same as the probability that $Z$ is $0$, which is $a = 1 – 4b = \frac{46}{98} = \frac{23}{49}$.
I am not sure how they got: $E(Z) = b + 2c = 7b$. Can someone explain this step where $b$ and $2c$ come from?
| \begin{align}\mathbb{E}(Z)&=0\cdot P(Z=0)+1\cdot P(Z=1)+2\cdot P(Z=2)\\
&=1\cdot P(Z=1)+2\cdot P(Z=2)\\
&=1\cdot P(X=1,Y=1)+2\cdot P(X=1,Y=2)\\
&=b+2c
\end{align}
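The rest of the quoted solution can be verified the same way (R sketch):
b <- 13/98; cc <- 3 * b; a <- 1 - b - cc
z <- c(0, 1, 2); p <- c(a, b, cc)
sum(z * p)                    # E(Z) = 7b
sum(z^2 * p) - sum(z * p)^2   # Var(Z) = 13b - 49b^2, maximised at b = 13/98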
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/313379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Coin Tossings probability I want to find the probability that in ten tossings a coin falls heads at least five times in succession. Is there any formula to compute this probability?
Answer provided is $\frac{7}{2^6}$
| Corrected answer after Orangetree pointed out I forgot to take into account events were not mutually exclusive.
You need to think about how many different coin tossing sequences give at least $5$ consecutive heads, and how many coin tossing sequences there are in total, and then take the ratio of the two.
Clearly there is $2^{10}$ coin tossing sequences in total, since each of $10$ coins have $2$ possible outcomes.
To see how many sequences have $5$ consecutive heads consider the following. In any sequence
$$X-X-X-X-X-X-X-X-X-X$$
We can start the $5$ consecutive heads in $6$ places. This can be seen by putting the run at the start of the sequence, and then moving it over step by step.
$$H-H-H-H-H-X-X-X-X-X$$
$$X-H-H-H-H-H-X-X-X-X$$
$$X-X-H-H-H-H-H-X-X-X$$
$$X-X-X-H-H-H-H-H-X-X$$
$$X-X-X-X-H-H-H-H-H-X$$
$$X-X-X-X-X-H-H-H-H-H$$
For the first scenario, we have $2^5$ possible sequences since the remaining coins can be either heads or tails without affecting the result.
For the second scenario, we can only take $2^4$ since we need to fix the first coin as tails to avoid counting sequences more than once.
Similarly, for the third scenario we can count $2^4$ previously uncounted sequences, as we fix the coin preceding the run as tails, and so on.
This gives total number of ways
$$2^5 + 2^4 + 2^4 + 2^4 + 2^4 + 2^4 = 2^4(2+1+1+1+1+1) = 2^4\times 7$$
Hence the probability should be
$$\frac{2^4 \times 7}{2^{10}} = \frac{7}{2^6}$$
To answer your question about whether a general formula exists: for $m \le n \le 2m$ (so that the scenarios counted above cannot overlap), the reasoning extends to give the probability of at least $m$ consecutive heads in $n$ coin tosses as
$$\frac{2^{n-m}+(n-m)\,2^{n-m-1}}{2^n} = \frac{2+n-m}{2^{m+1}}$$
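A brute-force check over all $2^{10}$ sequences in R (sketch; 1 denotes heads):
seqs <- as.matrix(expand.grid(rep(list(0:1), 10)))
has_run <- apply(seqs, 1, function(s) {r <- rle(s); any(r$values == 1 & r$lengths >= 5)})
mean(has_run)   # 0.109375
7 / 2^6         # 0.109375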
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/369157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Joint distribution of X and Y bernoulli random variables A box contains two coins: a regular coin and a biased coin with $P(H)=\frac23$. I choose a coin at random and toss it once. I define the random variable X as a Bernoulli random variable associated with this coin toss, i.e., X=1 if the result of the coin toss is heads and X=0 otherwise. Then I take the remaining coin in the box and toss it once. I define the random variable Y as a Bernoulli random variable associated with the second coin toss.
a)Find the joint PMF of X and Y.
b)Are X and Y independent?
My attempt to answer this question:
Let A be the event that first coin, I pick is the regular(fair) coin. Then conditioning on that event, I can find joint PMF. Once conditioned, I can decide if X and Y are independent(conditionally).
$P(A)=\frac12,P(A^c)=\frac12$.
In the event A, $P(X=1)=\frac12,P(Y=1)=\frac23$.
In the event$A^c, P(X=1)=\frac23, P(Y=1)=\frac12$
So, $P_{X,Y}(x,y)= P(X=x, Y=y|A)P(A) + P(X=x, Y=y|A^c)P(A^c)$
$P_{X,Y}(x,y)=P_{\frac12}(x)P_{\frac23}(y)(\frac12) + P_{\frac23}(x)P_{\frac12}(y)(\frac12)$
Now, how can we find the joint PMF of $X$ and $Y$?
| The joint pmf can be described by a 2-by-2 contingency table that shows the probabilities of getting $X=1$ and $Y=1$, $X=1$ and $Y=0$, $X=0$ and $Y=1$, $X=0$ and $Y=0$.
So you'll have:
$$\begin{array}{c|c|c}
 & X=0 & X=1 \\ \hline
Y=0 & \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{3}+\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{2}=\frac{1}{6} & \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{1}{3}+\frac{1}{2}\cdot\frac{2}{3}\cdot\frac{1}{2}=\frac{1}{4} \\
Y=1 & \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{2}{3}+\frac{1}{2}\cdot\frac{1}{3}\cdot\frac{1}{2}=\frac{1}{4} & \frac{1}{2}\cdot\frac{1}{2}\cdot\frac{2}{3}+\frac{1}{2}\cdot\frac{2}{3}\cdot\frac{1}{2}=\frac{1}{3}
\end{array}$$
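A simulation check in R (sketch), which also speaks to part (b): the joint probabilities do not factor into the product of the marginals, so $X$ and $Y$ are not independent.
set.seed(1)
N <- 1e6
first_fair <- sample(c(TRUE, FALSE), N, replace = TRUE)   # which coin is picked first
X <- rbinom(N, 1, ifelse(first_fair, 1/2, 2/3))
Y <- rbinom(N, 1, ifelse(first_fair, 2/3, 1/2))
round(prop.table(table(X, Y)), 3)                         # approx 1/6, 1/4, 1/4, 1/3
c(mean(X == 1) * mean(Y == 1), mean(X == 1 & Y == 1))     # approx 0.340 vs 0.333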
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/592258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A three dice roll question I got this question from an interview.
A and B are playing a game of dice as follows. A throws two dice and B throws a single die. A wins if the maximum of the two numbers is greater than the throw of B. What is the probability for A to win the game?
My solution.
If $(X,Y)$ are the two random throws of A and $Z$ is the random throw of B, then the problem asks (if I guessed correctly) to compute $P(\max(X,Y)>Z)$. But how to do it? I highly appreciate your help.
| I will take a less formal approach, in order to illustrate my thinking.
My first instinct was to visualize the usual $6 \times 6$ array of outcomes $(X,Y)$ of $A$'s dice rolls, and looking at when the larger of the two values is less than or equal to some value:
$$\begin{array}{cccccc}
(1,1) & (2,1) & (3,1) & (4,1) & (5,1) & (6,1) \\
(1,2) & (2,2) & (3,2) & (4,2) & (5,2) & (6,2) \\
(1,3) & (2,3) & (3,3) & (4,3) & (5,3) & (6,3) \\
(1,4) & (2,4) & (3,4) & (4,4) & (5,4) & (6,4) \\
(1,5) & (2,5) & (3,5) & (4,5) & (5,5) & (6,5) \\
(1,6) & (2,6) & (3,6) & (4,6) & (5,6) & (6,6) \\
\end{array}$$
It's intuitively clear from this diagram that the number of ordered pairs whose maximum is at most $k$ is $k^2$, for $k \in \{1, 2, 3, 4, 5, 6\}$. This is because geometrically, the set of such outcomes are arranged in a series of nested squares in the array. So it follows that for each of the six equiprobable outcomes for $B$'s die roll $Z \in \{1, 2, 3, 4, 5, 6\}$, $A$ will lose with probability $z^2/6^2$, hence the total probability of $A$ losing to $B$ is simply $$\frac{1^2 + \cdots + 6^2}{6^2(6)} = \frac{6(7)(13)}{6} \cdot \frac{1}{6^3} = \frac{7(13)}{6^3}.$$ Hence $A$ wins with probability $1 - \frac{7(13)}{6^3} = \frac{125}{216}$.
This line of reasoning is what I would use if I had no access to pencil or paper and had to answer the question mentally, reserving the computational part to the very end.
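A one-line simulation in R confirms the arithmetic (sketch):
set.seed(1)
A <- pmax(sample(6, 1e6, replace = TRUE), sample(6, 1e6, replace = TRUE))
B <- sample(6, 1e6, replace = TRUE)
mean(A > B)   # approx 125/216 = 0.5787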
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/600294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 9,
"answer_id": 1
} |
How to compute the standard error of the mean of an AR(1) process? I try to compute the standard error of the mean for a demeaned AR(1) process $x_{t+1} = \rho x_t + \varepsilon_{t+1} =\sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t+1-i}$
Here is what I did:
$$
\begin{align*}
Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\
&= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} \sum\limits_{i=0}^{\infty} \rho^i \varepsilon_{t-i}\right) \\
&= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_0 + & \rho^1 \varepsilon_{-1} + & \rho^2 \varepsilon_{-2} + & \cdots & \rho^{\infty} \varepsilon_{-\infty} + \\
\rho^0 \varepsilon_1 + & \rho^1 \varepsilon_{0} + & \rho^2 \varepsilon_{-1} + & \cdots & \rho^{\infty} \varepsilon_{1-\infty} + \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
\rho^0\varepsilon_{N-1} + & \rho^1 \varepsilon_{N-2} + & \rho^2 \varepsilon_{N-3} + & \cdots & \rho^{\infty} \varepsilon_{N-1-\infty} + \\
\end{pmatrix} \\
&= \frac{1}{N^2} Var\begin{pmatrix} \rho^0 \varepsilon_{N-1} + \\
(\rho^0 + \rho^1) \varepsilon_{N-2} + \\
(\rho^0 + \rho^1 + \rho^2) \varepsilon_{N-3} + \\
\cdots \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) \varepsilon_{1} + \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) \varepsilon_{0} + \\
(\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) \varepsilon_{-1} + \\
(\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) \varepsilon_{-2} + \\
\cdots\\
\end{pmatrix} \\
&= \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^0 + \\
(\rho^0 + \rho^1) + \\
(\rho^0 + \rho^1 + \rho^2) + \\
\cdots \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2}) + \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1}) + \\
(\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N}) + \\
(\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1}) + \\
\cdots\\
\end{pmatrix} \\
&= \frac{N \sigma_{\varepsilon}^2}{N^2} (\rho^0 + \rho^1 + \dots + \rho^{\infty}) \\
&= \frac{\sigma_{\varepsilon}^2}{N} \frac{1}{1 - \rho} \\
\end{align*}
$$
Probably, not every step is done in the most obvious way, so let me add some thoughts. In the third row, I just write out to two sum-signs. Here, the matrix has N rows. In the fourth row, I realign the matrix so that there is one row for every epsilon, so the number of rows is infinite here. Note that the last three parts in the matrix have the same number of elements, just differencing by a factor $\rho$ in each row. In the fifth row, I apply the rule that the variance of the sum of independent shocks is the sum of the variances of those shocks and notice that each $\rho^j$ element is summed up $N$ times.
The end result looks neat, but is probably wrong. Why do I think so? Because I run a MCS in R and things don't add up:
nrMCS <- 10000
N <- 100
pers <- 0.9
means <- numeric(nrMCS)
for (i in 1:nrMCS) {
means[i] <- mean(arima.sim(list(order=c(1,0,0), ar=pers), n = N))
}
#quantile(means, probs=c(0.025, 0.05, 0.5, 0.95, 0.975))
#That is the empirical standard error
sd(means)
0.9459876
#This should be the standard error according to my formula
1/(N*(1-pers))
0.1
Any hints on what I am doing wrong would be great! Or maybe a hint where I can find the correct derivation (I couldn't find anything). Is the problem maybe that I assume independence between the same errors?
$$Var(X + X) = Var(2X) = 4Var(X) \neq 2Var(X)$$
I thought about that, but don't see where I make that erroneous assumption in my derivation.
UPDATE
I forgot to square the rhos, as Nuzhi correctly pointed out. Hence it should look like:
$$ Var(\overline{x}) = \frac{\sigma_{\varepsilon}^2}{N^2} \begin{pmatrix} \rho^{2\times0} + \\
(\rho^0 + \rho^1)^2 + \\
(\rho^0 + \rho^1 + \rho^2)^2 + \\
\cdots \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-2})^2 + \\
(\rho^0 + \rho^1 + \rho^2 + \dots + \rho^{N-1})^2 + \\
(\rho^1 + \rho^2 + \rho^3 + \dots + \rho^{N})^2 + \\
(\rho^2 + \rho^3 + \rho^4 + \dots + \rho^{N+1})^2 + \\
\cdots\\
\end{pmatrix} $$
| Well actually when you take the following
\begin{align*}
Var(\overline{x}) &= Var\left(\frac{1}{N} \sum\limits_{t=0}^{N-1} x_t\right) \\
\end{align*}
It is easier to derive an implicit expression than an explicit one in this case; your answer and mine are the same, it's just that yours is a bit more difficult to handle because of the expansion of the $\rho$'s, though some algebraic manipulation should do the trick. I derived the answer as follows.
Since $\overline{x}$ is a linear combination of the $x_t$,
\begin{align*}
Var(\overline{x}) &= \frac{1}{N^2}\sum\limits_{t=0}^{N-1}\sum\limits_{j=0}^{N-1} Cov\left(x_t, x_j\right) \\
\end{align*}
\begin{align*}
Var(\overline{x}) &= \frac{1}{N^2}\sum\limits_{t=0}^{N-1}Var\left( x_t \right) + \frac{1}{N^2}\sum\limits_{t=0}^{N-1}\sum\limits_{j \ne t}^{N-1}Cov\left( x_t, x_j\right) \\
\end{align*}
Now for an AR(1) process $Var(x_t) = \frac{{\sigma_{\varepsilon}}^2}{1 - \rho^2} $ and $Cov(x_t, x_j) = \frac{{\sigma_{\varepsilon}}^2}{1 - \rho^2}\rho^{|j-t|} $.
Substituting in the above gives the required equation... hope this answers your question :)
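Plugging the question's numbers into this implicit formula gives a standard error close to the simulated one (R sketch; N, rho and sigma2 taken from the question's code):
N <- 100; rho <- 0.9; sigma2 <- 1
marg_var <- sigma2 / (1 - rho^2)                                 # Var(x_t)
cov_sum <- marg_var * sum(rho^abs(outer(0:(N-1), 0:(N-1), "-")))
sqrt(cov_sum / N^2)                                              # approx 0.95, close to sd(means) = 0.946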
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/40585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
} |
how can calculate $E(X^Y|X+Y=1)$ Let $X,Y$ be two independent Bernoulli random variables with success probability $p$. How can I calculate $E(X^Y|X+Y=1)$?
| Let's calculate all possible outcomes for $X^Y$:
*$X=0$, $Y=0$ $\Rightarrow X^Y = 0^0 = 1$, $P=(1-p)^2$
*$X=0$, $Y=1$ $\Rightarrow X^Y = 0^1 = 0$, $P=p(1-p)$
*$X=1$, $Y=0$ $\Rightarrow X^Y = 1^0 = 1$, $P=p(1-p)$
*$X=1$, $Y=1$ $\Rightarrow X^Y = 1^1 = 1$, $P=p^2$
The condition $X+Y=1$ means we consider only outcomes 2 and 3, which are equally likely (each has probability $p(1-p)$). That is, conditioned on $X+Y = 1$, $X^Y$ is a Bernoulli random variable with parameter $\frac{1}{2}$, and so $E[X^Y\mid X+Y = 1] = \frac{1}{2}$.
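A short simulation sketch (not from the original answer; the value p = 0.3 is an arbitrary illustration, and note that R evaluates 0^0 as 1, matching the convention above):
set.seed(1)
p <- 0.3
n <- 1e6
x <- rbinom(n, 1, p)
y <- rbinom(n, 1, p)
keep <- (x + y) == 1
mean(x[keep]^y[keep])   # close to 0.5, whatever the value of p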
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/86790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Tossing 2 coins, distribution We toss 2 fair coins once. Then, on each subsequent toss, we toss only the coin(s) that came up heads on the previous toss.
Let $X$ be the total number of heads.
The question is $EX$ and the distribution of $X$
I tried to calculate the expected value by working through small values of $X=x$, but it gets complicated quickly.
| Start by considering each coin in isolation, and the question becomes easier. Let $Y$ denote the number of heads for the first coin, and $Z$ denote the number of heads for the second coin, so $X=Y+Z$. $Y$ and $Z$ are identically distributed so let's just consider $Y$.
First, we know that if $Y=y$, then the first coin must have come up heads exactly $y$ times, and then tails once. The probability of $y$ heads in a row is $\left ( \frac{1}{2} \right )^{y}$, and the probability of getting tails after that is $\frac{1}{2} $. Thus:
$P(Y=y) = \left ( \frac{1}{2} \right )^{(y+1)}$
To calculate the expected value of $Y$, we sum $y \cdot P(Y=y)$ over all values of $Y$, from zero to infinity:
\begin{align}
E[Y] &= \sum_{y=0}^{\infty} y \cdot P(Y = y)\\
&= \sum_{y=0}^{\infty} y \cdot \left ( \frac{1}{2} \right )^{(y+1)} \\
&= 1
\end{align}
The expectation of the sum of two random variables is the sum of their expectations, so $E[X] = E[Y+Z] = E[Y] + E[Z] = 2$.
How do we use $P(Y)$ to get $P(X)$? Here's an example: Say $X=2$. Then we know there are three possibilities: (1) $Y=2$ and $Z=0$, (2) $Y=1$ and $Z=1$, or (3) $Y=0$ and $Z=2$. Since $Y$ and $Z$ are independent, we have:
\begin{align}
P(X=2) &= \left( \left( \frac{1}{2} \right)^{3}\cdot \frac{1}{2} \right ) + \left( \left( \frac{1}{2} \right)^{2}\cdot \left(\frac{1}{2} \right)^2 \right) + \left( \frac{1}{2} \cdot \left( \frac{1}{2} \right)^{3} \right )\\
&= 3 \cdot \left( \frac{1}{2} \right)^{4}
\end{align}
This example gives the intuition that maybe $P(X=x) = (x+1) \cdot \left( \frac{1}{2} \right)^{(x+2)}$. It is true for $X=0$: both heads have to come up tails on the first flip, and the probability of that occurring is $\frac{1}{4} = (0+1) \cdot \left( \frac{1}{2} \right)^{(0+2)}$.
It should be simple to show by induction that this is true for all values of $X$. Here is a sketch. First note that if $X=x$ there are $x+1$ possible combinations of $Y$ and $Z$ values that can produce $y + z = x$. Each value of $Y$ corresponds to a unique series of heads followed by a tail (and likewise for $Z$). If we iterate and ask which values of $Y$ and $Z$ could give $y' + z' = x+1$, we can start with our original set of possible combinations of $Y$ and $Z$ values and just add an extra head to the start of each run for the first coin, which multiplies the probability of each combination by $\frac{1}{2}$. That is, we set $y'= y+1$ and $z'=z$. Then we need to add one new term to the sum, to account for the case where $y'=0$ and $z'=x+1$.
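A simulation sketch in R (not part of the original answer; the helper function and names are mine) checking both $E[X] = 2$ and $P(X = x) = (x + 1)\left(\frac{1}{2}\right)^{x+2}$:
set.seed(1)
heads_until_tail <- function() {       # heads one coin shows before its first tail
  h <- 0
  while (runif(1) < 0.5) h <- h + 1
  h
}
x <- replicate(1e5, heads_until_tail() + heads_until_tail())   # X = Y + Z
mean(x)                                # close to 2
prop <- as.numeric(table(factor(x, levels = 0:4))) / length(x)
rbind(simulated = prop, theory = (0:4 + 1) * 0.5^(0:4 + 2))    # P(X = 0), ..., P(X = 4)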
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/119775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Normal Distribution Puzzle/Riddle Some time in the future, a lottery takes place, with winning number N. 3 of your friends from the future, John, Joe, and James, provide you with guesses on the number N.
John's guess a is randomly selected from a gaussian distribution centered at N with stdev x;
Joe's guess b is randomly selected from a gaussian distribution centered at N with stdev y;
James' guess c is randomly selected from a gaussian distribution centered at N with stdev z;
Given the values of a, x, b, y, c, z, what would be the best guess of N? Also define what "best" is.
| You can calculate and maximize the likelihood of N given a,b,c, with x,y,z being fixed.
The likelihood of a value of $N$ (the density of observing $a, b, c$ given that the mean is $N$) is:
$LL_{a,b,c}(N) = Pr(a | x,N) \cdot Pr(b | y,N) \cdot Pr(c | z,N)$
With the distributions being independent and Gaussian, this is
$LL_{a,b,c}(N) = \frac{1}{x\sqrt{2\pi}} e^{-\frac{(a-N)^2}{2x^2}} \cdot \frac{1}{y\sqrt{2\pi}} e^{-\frac{(b-N)^2}{2y^2}} \cdot
\frac{1}{z\sqrt{2\pi}} e^{-\frac{(c-N)^2}{2z^2}} = $
$\frac{1}{xyz(\sqrt{2\pi})^3} e^{-\frac{1}{2}(\frac{(a-N)^2}{x^2} +\frac{(b-N)^2}{y^2}+\frac{(c-N)^2}{z^2})}$
And we want to find the N that maximizes this likelihood. To find the maximum, we will search for a point where the derivative of the likelihood equals zero.
$\frac{d}{dN}LL_{a,b,c}(N) = \frac{1}{xyz(\sqrt{2\pi})^3}\cdot \frac{1}{2}\left(\frac{2(a-N)}{x^2} + \frac{2(b-N) }{y^2}+ \frac{2(c-N)}{z^2}\right) e^{-\frac{1}{2}(\frac{(a-N)^2}{x^2} +\frac{(b-N)^2}{y^2}+\frac{(c-N)^2}{z^2})}$
This equals zero if and only if
$\frac{2(a-N)}{x^2} + \frac{2(b-N) }{y^2}+ \frac{2(c-N)}{z^2} = 0$
So we get that
$y^2z^2a - y^2z^2N + x^2 z^2 b- x^2 z^2 N + x^2 y^2 c-x^2 y^2 N = 0$
$N = \frac{y^2z^2a+ x^2z^2b + x^2y^2c}{y^2z^2 + x^2z^2 + x^2y^2}$
is the maximum likelihood estimate.
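Note that this is simply the precision-weighted (inverse-variance-weighted) average of the three guesses, i.e. $\hat N = \frac{a/x^2 + b/y^2 + c/z^2}{1/x^2 + 1/y^2 + 1/z^2}$. A small R sketch (not from the original answer; the guesses and standard deviations are made-up illustration values) showing the two forms agree:
g <- c(105, 98, 110)   # the guesses a, b, c (made up)
s <- c(2, 5, 10)       # the stdevs x, y, z (made up)
w <- 1 / s^2           # precisions
sum(w * g) / sum(w)    # precision-weighted average
(s[2]^2*s[3]^2*g[1] + s[1]^2*s[3]^2*g[2] + s[1]^2*s[2]^2*g[3]) /
  (s[2]^2*s[3]^2 + s[1]^2*s[3]^2 + s[1]^2*s[2]^2)   # closed form above -- same value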
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/442888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Expected rolls to roll every number on a dice an odd number of times Our family has recently learned how to play a simple game called 'Oh Dear'. Each player has six playing cards (Ace, 2, 3, 4, 5, 6) turned face-up, and players take turns rolling the dice. Whatever number the dice rolls, the corresponding card is turned over. The winner is the first player to turn all their cards face-down, but if you roll the number of a card that is already face-down, then that card is turned face-up again (and you say 'Oh Dear!').
I want to work out the expected length of a game (in rolls of the dice). I'm interested first in working this out in the case of a single-player playing alone, and then also in the question of how the answer changes with multiple players. This is equivalent to working out the expected number of times a player must roll the dice to have rolled every number on the dice an odd number of times. (I assume a fair six-sided dice, but again would be interested in a more general solution too).
It is simple to work out the odds of winning as quickly as possible from any position, but I'm not sure how to go about calculating the expected number of rolls before a player would win...
| I think I've found the answer for the single player case:
If we write $e_{i}$ for the expected remaining length of the game if $i$ cards are facedown, then we can work out that:
(i). $e_{5} = \frac{1}{6}(1) + \frac{5}{6}(e_{4} + 1)$
(ii). $e_{4} = \frac{2}{6}(e_{5} + 1) + \frac{4}{6}(e_{3} + 1)$
(iii). $e_{3} = \frac{3}{6}(e_{4} + 1) + \frac{3}{6}(e_{2} + 1)$
(iv). $e_{2} = \frac{4}{6}(e_{3} + 1) + \frac{2}{6}(e_{1} + 1)$
(v). $e_{1} = \frac{5}{6}(e_{2} + 1) + \frac{1}{6}(e_{0} + 1)$
(vi). $e_{0} = \frac{6}{6}(e_{1} + 1)$
(vi) and (v) then give us (vii). $e_{1} = e_{2} + \frac{7}{5}$;
(vii) and (iv) then give us (viii). $e_{2} = e_{3} + \frac{11}{5}$;
(viii) and (iii) then give us (ix). $e_{3} = e_{4} + \frac{21}{5}$;
(ix) and (ii) then give us (x). $e_{4} = e_{5} + \frac{57}{5}$;
(x) and (i) then give us $e_{5} = 63 $
We can then add up to get $e_{0} = 63 + \frac{57}{5} + \frac{21}{5} + \frac{11}{5} + \frac{7}{5} + 1 = 83.2$.
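The single-player value is easy to check by simulation; here is a minimal R sketch (not from the original answer; the function name is mine), rolling a fair die until every face has come up an odd number of times:
set.seed(1)
play_once <- function() {
  counts <- integer(6)                  # times each face has been rolled
  rolls  <- 0
  while (any(counts %% 2 == 0)) {       # some card is still face-up
    face <- sample.int(6, 1)
    counts[face] <- counts[face] + 1
    rolls <- rolls + 1
  }
  rolls
}
mean(replicate(1e4, play_once()))       # close to 83.2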
Now, how would one generalize this to find the expected length of a game with $n$ players?
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/473444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
} |
Deriving posterior from a single observation z from a normal distribution (ESL book) I am reading the book The Elements of Statistical Learning by Hastie, Tibshirani and Friedman.
On page 271 the authors derive a posterior distribution from a single observation $z\sim N(\theta, 1)$, where the prior of $\theta$ is specified as $\theta \sim N(0, \tau)$. It then follows (according to the authors) that the posterior distribution equals $\theta | z \sim N\left( \frac{z}{1+\frac{1}{\tau}}, \frac{1}{1+\frac{1}{\tau}} \right).$
Now, my calculations yield
\begin{align}
\Pr\left(\theta |\textbf{Z}\right)
&=
\frac{\Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta)}{\int \Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta)d\theta}
\propto
\Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta) \\
&=
\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{1}{2}(z-\theta)^2 \right)
\frac{1}{\sqrt{2\pi\tau}}\exp\left(-\frac{1}{2\tau}\theta^2 \right) \\
&=
\frac{1}{2\pi\sqrt{\tau}}\exp\left(-\frac{1}{2} (z^2 + \theta^2 -2z\theta + \frac{\theta ^2}{\tau}) \right) \\
&=
\frac{1}{2\pi\sqrt{\tau}}\exp\left(-\frac{1}{2} (\theta^2(1+\frac{1}{\tau}) + z^2 -2z\theta) \right) \\
&=
\frac{1}{2\pi\sqrt{\tau}}\exp\left(-\frac{1}{2 \frac{1}{1+\frac{1}{\tau}}}
(\theta^2 + \frac{z^2}{1+\frac{1}{\tau}} -2 \frac{z\theta}{1+\frac{1}{\tau}} ) \right).
\end{align}
The denominator of $\frac{z^2}{1+\frac{1}{\tau}} $ should equal $(1+\frac{1}{\tau})^2$ for me to be able to "complete the square" and get
\begin{align}
\Pr\left(\theta |\textbf{Z}\right)
&\propto
\frac{1}{2\pi\sqrt{\tau}}\exp\left(-\frac{1}{2 \frac{1}{1+\frac{1}{\tau}}}
(\theta^2 + \frac{z^2}{(1+\frac{1}{\tau})^2} -2 \frac{z\theta}{1+\frac{1}{\tau}} ) \right) \\
&=\text{constant}\times\exp\left(-\frac{1}{2 \frac{1}{1+\frac{1}{\tau}}}
(\theta - \frac{z}{1+\frac{1}{\tau}})^2 \right),
\end{align}
such that $\theta | z \sim N\left( \frac{z}{1+\frac{1}{\tau}}, \frac{1}{1+\frac{1}{\tau}} \right)$.
My question is:
Where do I go wrong in the process? Should I divide by $\int \Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta)d\theta = \Pr(\textbf{Z})$? If so, what is the difference between $\Pr(\textbf{Z})$ and $\Pr\left(\textbf{Z} \mid \theta\right)$ in this example?
Best regards,
wanderingashenvalewisp
| Since we're looking for the pdf of $\theta$, we're only concerned with terms that include it.
\begin{align}
\Pr\left(\theta |\textbf{Z}\right)
&\propto \Pr\left(\textbf{Z} \mid \theta\right) \Pr(\theta) \\
&\propto \exp\left(-\frac{1}{2}(z-\theta)^2 -\frac{1}{2\tau}\theta^2 \right) \\
&= \exp\left(-\frac{1}{2}\left((1+\frac{1}{\tau})\theta^2 -2z\theta+z^2 \right)\right)\\
&= \exp\left(-\frac{1}{2}(1+\frac{1}{\tau})\left(\theta^2 -2\frac{z}{1+\frac{1}{\tau}}\theta+\frac{z^2}{1+\frac{1}{\tau}} \right)\right)\\
&\propto \exp\left(-\frac{1}{2}(1+\frac{1}{\tau})\left(\theta - \frac{z}{1+\frac{1}{\tau}} \right)^2\right)
\end{align}
And that last line implies the desired result.
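As a quick numerical check (not part of the original answer; the values of $z$ and $\tau$ and the grid are arbitrary choices), one can normalize the unnormalized posterior on a grid in R and compare its mean and variance with $\frac{z}{1+1/\tau}$ and $\frac{1}{1+1/\tau}$:
z   <- 1.5
tau <- 2
h     <- 1e-3
theta <- seq(-10, 10, by = h)
post  <- exp(-0.5 * (z - theta)^2 - theta^2 / (2 * tau))   # unnormalized posterior
post  <- post / sum(post * h)                              # normalize numerically
m <- sum(theta * post * h)
v <- sum(theta^2 * post * h) - m^2
c(mean = m, var = v)                    # about 1 and 2/3
c(z / (1 + 1/tau), 1 / (1 + 1/tau))     # closed form: 1 and 2/3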
| {
"language": "en",
"url": "https://stats.stackexchange.com/questions/501858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |