Variance of average of $n$ correlated random variables
Reading about deep learning, I came across the following formula:
$$ \operatorname{var}\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \rho \sigma^2 + \frac{1-\rho}{n} \sigma^2, $$
where $X_1, \dots, X_n$ are identically distributed random variables with
pairwise correlation $\rho > 0$ and variance $\operatorname{var}(X_i) = \sigma^2$.
- How can this formula be derived?
- According to this formula, how does bootstrap aggregating alleviate the effect of overfitting? What is the relationship?

Tags: machine-learning deep-learning bootstrap regularization bagging
asked Feb 10 at 14:58 by OmegaD
edited Feb 10 at 18:42 by Rodrigo de Azevedo
1 Answer
By definition, we have
$$\operatorname{var}\left(\sum_{i=1}^n X_i\right)=\operatorname{cov}\left(\sum_{i=1}^n X_i,\sum_{i=1}^n X_i\right)=\sum_{i=1}^n \operatorname{var}(X_i)+\sum_{i\neq j}\operatorname{cov}(X_i,X_j),$$
which equals $n \operatorname{var}(X_i)+n(n-1)\operatorname{cov}(X_i,X_j)=n\sigma^2+n(n-1)\rho\sigma^2$, where $i\neq j$. Substituting this into the original equation yields
$$\operatorname{var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)=\frac{1}{n^2}\left(n\sigma^2+n(n-1)\rho\sigma^2\right)=\rho\sigma^2+\frac{1-\rho}{n}\sigma^2.$$
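As a quick numerical sanity check (not part of the original answer), the identity can be verified by simulating equicorrelated Gaussian variables and comparing the empirical variance of their mean with the formula. The parameter values and names below are illustrative assumptions:

```python
# Monte Carlo check of var(mean) = rho*sigma^2 + (1-rho)/n * sigma^2.
# n, rho, sigma, n_trials are illustrative choices, not from the post.
import numpy as np

n, rho, sigma, n_trials = 10, 0.3, 2.0, 200_000
rng = np.random.default_rng(0)

# Equicorrelated covariance matrix: sigma^2 on the diagonal, rho*sigma^2 off it.
cov = sigma**2 * (rho * np.ones((n, n)) + (1 - rho) * np.eye(n))
X = rng.multivariate_normal(np.zeros(n), cov, size=n_trials)  # shape (n_trials, n)

empirical = X.mean(axis=1).var()                   # variance of the row means
theoretical = rho * sigma**2 + (1 - rho) / n * sigma**2
print(empirical, theoretical)                      # the two should agree closely
```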
Each $X_i$ can be thought of as a single decision mechanism, call it a DM (e.g. a regressor), and the variance of a single decision is $\sigma^2$. By training DMs on bootstrap samples and aggregating their outputs, you end up with the decision variance above, which is strictly smaller than $\sigma^2$ whenever $\rho \neq 1$ and $n \neq 1$. The DMs will of course have some degree of correlation, since they are trained on bootstrap samples drawn from the same base dataset, but that correlation will most probably not equal $1$. Overfitted mechanisms generally have large variance, so by reducing the variance of your DM you implicitly address the problem of overfitting; a simulation sketch of this mechanism follows.
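Note that as $n \to \infty$ the expression tends to $\rho\sigma^2$, so the pairwise correlation sets a floor on how much variance aggregation can remove. A minimal sketch of the bagging mechanism, where each "DM" is simply the mean of a bootstrap resample of a shared dataset (a stand-in for a trained regressor); all names and constants here are illustrative assumptions, not from the answer:

```python
# Sketch: variance of a single bootstrap "DM" vs. the bagged average of n_models DMs.
import numpy as np

rng = np.random.default_rng(1)
n_models, n_repeats, n_obs = 10, 20_000, 30

single, bagged = [], []
for _ in range(n_repeats):
    data = rng.normal(size=n_obs)                  # fresh base training set
    # Each DM is the mean of a bootstrap resample of this one shared dataset.
    dms = [rng.choice(data, size=n_obs, replace=True).mean()
           for _ in range(n_models)]
    single.append(dms[0])                          # decision of one DM
    bagged.append(np.mean(dms))                    # aggregated decision

print(np.var(single), np.var(bagged))              # bagged variance is smaller
```

In this simulation the bagged variance is smaller than that of a single DM but does not shrink toward zero, because the DMs remain correlated through the shared dataset — exactly the $\rho\sigma^2$ floor in the formula.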
answered Feb 10 at 15:59 by gunes
edited Feb 10 at 19:14 by StubbornAtom
– OmegaD (Feb 10 at 16:15): Fantastic, thank you so much for your answer! Quick question: in the term $n \operatorname{var}(X_i) + n(n-1) \operatorname{cov}(X_i,X_j)$, where do the $n$ and $n-1$ come from? Sorry if it is too obvious a question.

– gunes (Feb 10 at 16:31): @OmegaD There are $n^2$ pairs of $(i,j)$, where $n$ of them have $i=j$, and $n^2-n=n(n-1)$ of them have $i\neq j$.