Maximal type inequality for sum (or average) of i.i.d. random variables











Let $Z_i$ be i.i.d. random variables with $\mathbb{E}[Z_i] = 0$ and $\mathbb{E}|Z_i|^p < \infty$ for $p = 1, 2, 3, \cdots$. I am looking for the following type of estimate, if possible; it is not like the concentration inequalities that one normally sees.




There exist $N_0$ sufficiently large and $t_0$ sufficiently small
such that for all $N \geq N_0$ and $1/N < t \leq t_0$, we have
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left( \frac{1}{k}\sum_{i=1}^k Z_i \right) \leq t \right\} \leq C t^\alpha \tag{$\star$}$$
or, equivalently,
$$\mathbb{P}\left\{\max_{1 \leq k \leq N} \left( \sum_{i=1}^k Z_i - tk \right) \leq 0 \right\} \leq C t^\alpha.$$




(I know the distributions of the $Z_i$'s, if this is helpful.)
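Since the distribution of the $Z_i$'s is known, one can at least check numerically whether a bound of the form $Ct^\alpha$ looks plausible. Below is a minimal Monte Carlo sketch, assuming standard normal $Z_i$ purely for illustration (the function name `estimate_prob` is just a placeholder, and `numpy` is assumed):

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_prob(t, N, trials=20000):
    """Monte Carlo estimate of P( max_{1<=k<=N} (1/k) * sum_{i<=k} Z_i <= t )."""
    count = 0
    for _ in range(trials):
        Z = rng.standard_normal(N)                      # illustrative choice of Z_i
        running_avg = np.cumsum(Z) / np.arange(1, N + 1)  # partial-sum averages S_k / k
        if running_avg.max() <= t:
            count += 1
    return count / trials

for t in [0.02, 0.05, 0.1, 0.2]:
    print(t, estimate_prob(t, N=1000))
```

Plotting these estimates against $t$ on a log-log scale would suggest a candidate exponent $\alpha$ for the specific distribution at hand.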



Is there a name for this type of inequality, where we look at the maximum of the averages (or of the partial sums of i.i.d. random variables, but where we cannot move the constant to the other side, as in $(\star)$ above)?



I found a related general result in this paper by Chung; there the mean-zero random variables are only assumed to be independent. With his notation, $S_n^* = \max_{1 \leq k \leq n} |S_k|$ and $s_n = \text{Var}[S_n]$, which is $Cn$ in the i.i.d. case, we have



Theorem 2. If $g_n \downarrow 0$ and
$$g_n^{-1} = O\left((\log_2 s_n)^{1/2}\right),$$
then we have
$$\mathbb{P}(S_n^* < g_n s_n) = (1+o(1)) \exp\left(-\frac{\pi^2}{8g_n^2}\right).$$
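For orientation, in the i.i.d. case the hypothesis can be unpacked as follows (writing $\sigma^2 = \text{Var}[Z_1]$, a notational assumption only): $s_n = \text{Var}[S_n] = \sigma^2 n$, so
$$g_n^{-1} = O\left((\log_2 s_n)^{1/2}\right) \iff g_n \geq \frac{c}{(\log_2 (\sigma^2 n))^{1/2}} \ \text{ for some } c > 0,$$
i.e. $g_n$ may tend to zero, but only rather slowly; in particular, keeping $g_n$ equal to a fixed small $t$ is not covered, since the theorem requires $g_n \downarrow 0$.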



Is there a simpler inequality of this type for i.i.d. random variables? The proof of this inequality in his general setting is quite technical.



Background:
The original event that I was trying to estimate is
$$\left\{\inf_{1 \leq k \leq tN} \, \sup_{tN \leq l \leq N} \sum_{i=k+1}^l (X_i - Y_i) \leq 0\right\},$$

where $X_i \sim \exp(\rho)$ and $Y_i \sim \exp(\rho - t)$, all independent of each other.
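For reference, with the rate parametrization of the exponential distribution (so $\mathbb{E}[X_i] = 1/\rho$ and $\mathbb{E}[Y_i] = 1/(\rho - t)$), the centering constant appearing below is
$$\mathbb{E}[X_i - Y_i] = \frac{1}{\rho} - \frac{1}{\rho - t} = -\frac{t}{\rho(\rho - t)}.$$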



As with Kolmogorov's or Doob's maximal inequality, it may be helpful to center the random variables; defining
$Z_i = X_i - Y_i - \mathbb{E}[X_i - Y_i]$, we get the centered version
$$\left\{\inf_{1 \leq k \leq tN} \, \sup_{tN \leq l \leq N} \sum_{i=k+1}^l \left(Z_i - \frac{t}{\rho(\rho - t)} \right) \leq 0 \right\},$$

and this boils down to estimating
$$\mathbb{P}\left\{\inf_{1 \leq k \leq tN} \, \sup_{tN \leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right) \leq t \right\} \leq C t^\alpha$$

for some positive $C, \alpha$.
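To spell out this last reduction: dividing the centered sum by $l - k$ gives
$$\sum_{i=k+1}^l \left(Z_i - \frac{t}{\rho(\rho - t)}\right) \leq 0 \iff \frac{1}{l-k}\sum_{i=k+1}^l Z_i \leq \frac{t}{\rho(\rho - t)},$$
and since $\frac{t}{\rho(\rho - t)}$ is comparable to $t$ (say for $0 < t \leq \rho/2$), replacing the threshold by $t$ only changes the constant $C$.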



Final remark:
One way to get some kind of tail estimate is to pass to Brownian motion via Donsker's theorem, from which we could obtain
$$\limsup_{N \rightarrow \infty} \mathbb{P}\left\{\inf_{1 \leq k \leq tN} \, \sup_{tN \leq l \leq N} \left( \frac{1}{l-k}\sum_{i=k+1}^l Z_i \right) \leq t \right\} \leq C t^\alpha$$
for all $t \in (0, t_0)$. In this case, however, $N_0$ would depend on $t$, so instead of "$N \geq N_0$" we have to use "$\limsup_N$", and I am trying to avoid this.









