Maximal type inequality for sum (or average) of i.i.d. random variables
Let $Z_i$ be i.i.d. random variables with $\mathbb{E}[Z_i] = 0$ and $\mathbb{E}|Z_i|^p < \infty$ for $p = 1, 2, 3, \dots$. I am looking for the following type of estimate, if possible; it is not like the concentration inequalities that one normally sees.
There exist $N_0$ sufficiently large and $t_0$ sufficiently small such that for all $N \geq N_0$ and $1/N < t \leq t_0$ we have
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left( \frac{1}{k}\sum_{i=1}^k Z_i \right) \leq t \right\} \leq C t^\alpha \tag{$\star$}$$
or, equivalently,
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left(\sum_{i=1}^k Z_i - tk\right) \leq 0 \right\} \leq C t^\alpha.$$
(I know the distributions of the $Z_i$'s, if this is helpful.)
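For intuition, here is a minimal Monte Carlo sketch of the left-hand side of $(\star)$. The choice of standard normal $Z_i$ and the particular values of $N$ and $t$ are purely illustrative assumptions, since in the actual problem the distribution of the $Z_i$ is given.

```python
import numpy as np

def prob_max_average_below(t, N, n_trials=5_000, seed=0):
    """Monte Carlo estimate of P{ max_{1<=k<=N} (1/k) sum_{i=1}^k Z_i <= t }.
    Taking the Z_i standard normal is an illustrative assumption."""
    rng = np.random.default_rng(seed)
    Z = rng.standard_normal((n_trials, N))                 # each row is one sample of (Z_1, ..., Z_N)
    running_avg = np.cumsum(Z, axis=1) / np.arange(1, N + 1)
    return np.mean(running_avg.max(axis=1) <= t)

# Rough empirical check of the conjectured C * t^alpha behaviour for small t.
for t in (0.01, 0.02, 0.05, 0.1):
    print(f"t = {t}: P ~ {prob_max_average_below(t, N=1000):.4f}")
```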
Is there a name for this type of inequality, where we look at the maximum of the averages (or of the partial sums of i.i.d. random variables, but where we cannot move the constant to the other side, as in $(\star)$ above)?
I found a related general result in this paper by Chung; there the mean-zero random variables are only assumed to be independent. With his notation, $S_n^* = \max_{1 \leq k \leq n} |S_k|$ and $s_n = \operatorname{Var}[S_n]$, which is $Cn$ in the i.i.d. case, we have
Theorem 2. If $g_n \downarrow 0$ and
$$g_n^{-1} = O\big((\log_2 s_n)^{1/2}\big),$$
then we have
$$\mathbb{P}(S_n^* < g_n s_n) = (1+o(1)) \exp\left(-\frac{\pi^2}{8 g_n^2}\right).$$
Is there a simpler inequality of this type for i.i.d. random variables? The proof of this inequality in his general setting is quite technical.
Background:
The original event that I was trying to estimate is
$$\left\{\inf_{1\leq k \leq tN} \ \sup_{tN \leq l \leq N}\ \sum_{i=k+1}^l (X_i - Y_i) \leq 0\right\},$$
where $X_i \sim \operatorname{Exp}(\rho)$ and $Y_i \sim \operatorname{Exp}(\rho - t)$, all independent of each other.
As with the Kolmogorov or Doob maximal inequality, it may help to center the random variables: defining $Z_i = X_i - Y_i - \mathbb{E}[X_i - Y_i]$ (note that $\mathbb{E}[X_i - Y_i] = \frac{1}{\rho} - \frac{1}{\rho - t} = -\frac{t}{\rho(\rho - t)}$), we get the centered version
$$\left\{\inf_{1\leq k \leq tN} \ \sup_{tN \leq l \leq N}\ \sum_{i=k+1}^l \left(Z_i - \frac{t}{\rho(\rho - t)} \right) \leq 0 \right\},$$
and this boils down to estimating
$$\mathbb{P}\left\{\inf_{1\leq k \leq tN} \ \sup_{tN \leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right) \leq t \right\} \leq C t^\alpha$$
for some positive $C, \alpha$.
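For concreteness, here is a minimal Monte Carlo sketch of the original inf-sup event above. The parameter values $\rho = 1$, $t = 0.1$, $N = 500$ are illustrative assumptions, and the code uses the fact that, writing $S_m = \sum_{i=1}^m (X_i - Y_i)$, the inf-sup separates as $\inf_k \sup_l (S_l - S_k) = \max_{tN \leq l \leq N} S_l - \max_{1 \leq k \leq tN} S_k$.

```python
import numpy as np

def prob_infsup_event(rho=1.0, t=0.1, N=500, n_trials=5_000, seed=0):
    """Monte Carlo estimate of
       P{ inf_{1<=k<=tN} sup_{tN<=l<=N} sum_{i=k+1}^l (X_i - Y_i) <= 0 }
    with X_i ~ Exp(rho), Y_i ~ Exp(rho - t) all independent.
    The default parameter values are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    k_max = max(int(t * N), 1)          # k ranges over 1..tN
    l_min = int(t * N)                  # l ranges over tN..N
    count = 0
    for _ in range(n_trials):
        # numpy's exponential sampler is parametrised by the scale 1/rate.
        D = rng.exponential(1.0 / rho, N) - rng.exponential(1.0 / (rho - t), N)
        S = np.concatenate(([0.0], np.cumsum(D)))   # S[m] = sum_{i=1}^m (X_i - Y_i)
        # sum_{i=k+1}^l (X_i - Y_i) = S[l] - S[k], so
        # inf_k sup_l (S[l] - S[k]) = max_{tN<=l<=N} S[l] - max_{1<=k<=tN} S[k].
        value = S[l_min:N + 1].max() - S[1:k_max + 1].max()
        count += (value <= 0)
    return count / n_trials

print(prob_infsup_event())
```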
Final remark:
One way to get some kind of tail estimate is to pass to Brownian motion via Donsker's theorem, which gives
$$\limsup_{N\rightarrow \infty} \mathbb{P}\left\{\inf_{1\leq k \leq tN} \ \sup_{tN \leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right) \leq t \right\} \leq C t^\alpha$$
for all $t \in (0, t_0)$. In this case $N_0$ would depend on $t$, so instead of "$N \geq N_0$" we have to use "$\limsup_N$", and I am trying to avoid this.
real-analysis probability-theory