Consistency of two measurements including means and standard deviations
This is a simplified version of a real-life experiment: we performed two experiments attempting to measure the same quantity, and we obtained the results $0.8 \pm 0.1$ and $1.2 \pm 0.2$. (That's all we know!)




How can we calculate the probability that these two measurements are consistent with each other (i.e., that they are consistent with a single true value)?
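One common way to quantify this (a minimal sketch, not necessarily what the original experimenters intended: it assumes the quoted $\pm$ values are one-sigma standard errors of independent, normally distributed estimates) is to compare the difference of the two results to its combined uncertainty:

```python
import math

# Consistency check for two measurements a ± sa and b ± sb, assuming the
# quoted uncertainties are one-sigma standard errors of independent,
# normally distributed estimates (an assumption, not given in the question).
a, sa = 0.8, 0.1
b, sb = 1.2, 0.2

# Difference divided by its combined standard error.
z = abs(b - a) / math.sqrt(sa**2 + sb**2)

# Two-sided p-value: probability of a discrepancy at least this large
# if both measurements share a single true value.
p_value = math.erfc(z / math.sqrt(2))

print(round(z, 3), round(p_value, 3))  # z ≈ 1.789, p ≈ 0.074
```

Under these assumptions the two results differ by about 1.8 combined standard errors, which is mild tension rather than clear inconsistency; but as the comments below stress, the conclusion hinges entirely on what $\pm$ means here.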
  • We don't know what $\pm 0.1$ means. It seems likely to be a $95\%$ confidence region under an assumption of normal distribution, but it could be a $90\%$ confidence region; and if these measurements are from a "six-sigma" method then the confidence is much higher than $95\%$. Are there really no other clues as to what was meant? (And, of course, what are your thoughts so far on how to do this?)
    – David K
    Dec 7 '14 at 21:15
  • Dear @DavidK, as far as I know, $\pm 0.1$ is one-sigma, i.e. a $68\%$ confidence region. There are actually no other clues. I was thinking about the t-test and the F-test (F distribution) given one degree of freedom, but I'm not really sure!
    – Ehsan M. Kermani
    Dec 8 '14 at 2:26
  • Good, it sounds like you have sample standard deviations. If you don't know the sample sizes, that's a bit of a hindrance.
    – David K
    Dec 8 '14 at 5:24
probability statistics statistical-inference
asked Dec 7 '14 at 20:47
Ehsan M. Kermani
1 Answer
If we assume that the $\pm$ denotes an $x$ confidence interval, that the confidence intervals are symmetric, and that there is a single true value $y$, then you can say:



$$p=\begin{cases}
\frac{(1-x)^2}{4}, & y < 0.7\\
\frac{x(1-x)}{2}, & 0.7 \le y \le 0.9\\
\frac{(1-x)^2}{4}, & 0.9 < y < 1.0\\
\frac{x(1-x)}{2}, & 1.0 \le y \le 1.4\\
\frac{(1-x)^2}{4}, & 1.4 < y
\end{cases}$$



Now if you add all this up, you get a value which is (surprisingly) $\in[0,1]$. You might be tempted to say that this is the probability that there is a single true value, but you would be wrong!



And this is why: consider $y$ and $y+\Delta$. For $\Delta$ sufficiently large that you are happy to say the two values are distinct, the probabilities that they each fall into their respective intervals are exactly the same; this is the definition of a confidence interval! So the probability that they are identical is equal to the probability that they are not identical for $\Delta$, and $-\Delta$, and $2\Delta$, and an infinite number of other $\Delta$ variants. Add up an infinite number of numbers $>0$ and you will soon have a number $>1$, so it cannot represent a probability.



Without more information on the methodology, you can only state that the probability of a single true value is $\in[0,1]$, but then, what isn't?
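As a numerical check on the piecewise expression above (a minimal sketch, using the intervals $[0.7, 0.9]$ and $[1.0, 1.4]$ from the question and, as an assumption, the one-sigma confidence level $x = 0.68$ mentioned in the comments):

```python
# Evaluate the answer's piecewise probability p(y) at confidence level x.
# The interval endpoints 0.7, 0.9, 1.0, 1.4 come from 0.8 ± 0.1 and 1.2 ± 0.2.

def p(y, x):
    """Piecewise probability from the cases expression above."""
    if 0.7 <= y <= 0.9 or 1.0 <= y <= 1.4:  # y inside exactly one quoted interval
        return x * (1 - x) / 2
    return (1 - x) ** 2 / 4                 # y outside both quoted intervals

x = 0.68  # hypothetical one-sigma confidence level
print(p(0.8, x))   # inside the first interval:  x(1-x)/2 = 0.1088
print(p(0.95, x))  # between the two intervals: (1-x)²/4 = 0.0256
```

Note that these case values depend only on which region $y$ falls in, not on $y$ itself, which is exactly why summing (or integrating) them over all candidate $y$ runs into the problem described next in the answer's argument.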
  • Thanks, Dale. Let me ask: where are those probability functions coming from?
    – Ehsan M. Kermani
    Dec 8 '14 at 18:34
  • If you construct a confidence interval for a mean, say 95%, then there is a 95% chance that the value lies within it. Therefore there is a 2.5% chance it lies above and a 2.5% chance it lies below. Since the experiments (and therefore the confidence intervals) are independent, the probabilities follow. To be totally accurate, you should partition the low and high probabilities, but the maths has no real meaning and was provided to illustrate the point.
    – Dale M
    Dec 8 '14 at 22:33
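The 95%/2.5% tail split in the comment above can be checked numerically (a sketch assuming a normal sampling distribution; 1.96 is the approximate standard normal 97.5% quantile):

```python
import math

# For a two-sided 95% normal confidence interval, the half-width is about
# 1.96 standard errors, and the 5% miss probability splits evenly into
# 2.5% above and 2.5% below the interval.
z = 1.96  # approximate standard normal 97.5% quantile

# P(Z > z) for a standard normal, via the complementary error function.
tail_above = math.erfc(z / math.sqrt(2)) / 2

print(round(tail_above, 4))  # ≈ 0.025, i.e. 2.5% in each tail
```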
answered Dec 8 '14 at 2:34
Dale M