How can I determine the joint p.d.f. of $(X,Y)$, i.e., $f_{X,Y}(x,y)$?
Consider $Z=X+Y$, where $X$, $Y$ and $Z$ are random variables with p.d.f.s denoted $f_X(x)$, $f_Y(y)$ and $f_Z(z)$, respectively. How can I determine the joint p.d.f. of $(X,Y)$, i.e., $f_{X,Y}(x,y)$?
In addition, is it possible to calculate $f_{X,Z}(x,z)$ and $f_{Y,Z}(y,z)$?
Thanks in advance!
calculus probability-theory probability-distributions
asked Dec 24 '18 at 17:01, edited Dec 24 '18 at 17:04 – Dave
Convolution is one way to do this. – Sean Roberson, Dec 24 '18 at 17:03
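To make the convolution suggestion concrete, here is a minimal numerical sketch, assuming (purely for illustration) that $X$ and $Y$ are independent standard normals, so that $f_Z(z)=\int_{-\infty}^{\infty} f_X(x)\,f_Y(z-x)\,dx$:

```python
import numpy as np
from scipy.stats import norm

# Assumption (illustrative only): X, Y independent standard normals.
# Under independence, f_Z is the convolution of the marginals:
#   f_Z(z) = \int f_X(x) f_Y(z - x) dx
x = np.linspace(-10.0, 10.0, 2001)   # integration grid
dx = x[1] - x[0]

def f_Z(z):
    """Numerically convolve the two marginal densities at the point z."""
    return np.trapz(norm.pdf(x) * norm.pdf(z - x), dx=dx)

# Sanity check: the sum of two independent standard normals is N(0, 2).
for z in (0.0, 1.0, 2.5):
    print(z, f_Z(z), norm.pdf(z, scale=np.sqrt(2)))
```

Note that this gives the density of $Z$ alone; as the answers below explain, the marginals do not determine $f_{X,Y}$ without an independence (or other dependence) assumption.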
2 Answers
In general, even if random variables $X$ and $Y$ have pdfs $f_{X}$ and $f_{Y}$, it may happen that the random vector $(X,Y)$ does not have a pdf $f_{XY}$.
Let us clarify some terminology. Let $(\Omega,\mathcal{F},P)$ be a probability space. Given a random variable $X$, its distribution $\mu_{X}$ is a Borel measure $\mu_{X}:\mathcal{B}(\mathbb{R})\rightarrow[0,1]$ defined by $\mu_{X}(B)=P\left(X^{-1}(B)\right)$, $B\in\mathcal{B}(\mathbb{R})$. If there exists a Borel function $f_{X}:\mathbb{R}\rightarrow\mathbb{R}$ such that $\int_{B}f_{X}(x)\,dx=\mu_{X}(B)$ for every $B\in\mathcal{B}(\mathbb{R})$, we say that $X$ has a pdf. Since $\mu_{X}\geq 0$, we have $f_{X}\geq 0$ ($m$-a.e., where $m$ is the Lebesgue measure on $\mathbb{R}$). Moreover, $f_{X}$ is not unique but is only unique $m$-a.e. Finally, $X$ has a pdf if and only if $\mu_{X}$ is absolutely continuous with respect to the Lebesgue measure $m$ (in the sense that $m(B)=0\Rightarrow\mu_{X}(B)=0$).
This setting extends to the multi-dimensional case. For example, the (joint) distribution $\mu_{XY}$ of the random vector $(X,Y)$ is a Borel measure $\mu_{XY}:\mathcal{B}(\mathbb{R}^{2})\rightarrow[0,1]$ such that $\mu_{XY}(B)=P\left((X,Y)^{-1}(B)\right)$. Here $(X,Y)$ is regarded as the map $(X,Y):\Omega\rightarrow\mathbb{R}^{2}$, $\omega\mapsto(X(\omega),Y(\omega))$. Similarly, if there exists a Borel function $f_{XY}:\mathbb{R}^{2}\rightarrow\mathbb{R}$ such that $\mu_{XY}(B)=\int_{B}f_{XY}(x,y)\,dm_{2}(x,y)$, where $m_{2}$ is the Lebesgue measure on $\mathbb{R}^{2}$, then we say that $(X,Y)$ has a (joint) pdf. Again, $(X,Y)$ has a pdf if and only if $\mu_{XY}$ is absolutely continuous with respect to $m_{2}$. In this case, the pdf $f_{XY}$ is unique up to $m_{2}$-a.e. equality and $f_{XY}\geq 0$ $m_{2}$-a.e.
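As a quick illustration of the defining identity $\int_{B}f_{X}(x)\,dx=\mu_{X}(B)$ (my addition, assuming $X\sim N(0,1)$ purely as an example), one can compare the integral of a candidate density over an interval $B=[a,b]$ with a Monte Carlo estimate of $\mu_{X}(B)=P\left(X^{-1}(B)\right)$:

```python
import numpy as np
from scipy.stats import norm

# Illustrative assumption: X ~ N(0,1), B = [a, b].
# A pdf must satisfy  \int_B f_X(x) dx = mu_X(B) = P(X in B)  for Borel sets B.
rng = np.random.default_rng(0)
a, b = -0.5, 1.2

lhs = norm.cdf(b) - norm.cdf(a)                 # \int_a^b f_X(x) dx
samples = rng.standard_normal(1_000_000)        # draws from the law of X
rhs = np.mean((samples >= a) & (samples <= b))  # Monte Carlo estimate of mu_X(B)

print(lhs, rhs)   # the two numbers should agree to roughly three decimals
```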
Counterexample showing that $X$ and $Y$ can both have a pdf while $(X,Y)$ does not: Choose a probability space $(\Omega,\mathcal{F},P)$ on which there exists a random variable $X:\Omega\rightarrow\mathbb{R}$ with $X\sim N(0,1)$. Define $Y=X$. Clearly, $X$ and $Y$ both have pdfs, denoted $f_{X}$ and $f_{Y}$ (in fact, $f_{X}=f_{Y}$). We prove that $(X,Y)$ does not have a pdf. Let $L=\{(t,t)\mid t\in\mathbb{R}\}$. Note that $L$ is a Borel set and $(X,Y)^{-1}(L)=\Omega$, so $\mu_{XY}(L)=P(\Omega)=1$. On the other hand, $m_{2}(L)=0$. Hence $\mu_{XY}$ is not absolutely continuous with respect to $m_{2}$, and so $(X,Y)$ does not have a pdf.
answered Dec 24 '18 at 19:24 – Danny Pak-Keung Chan
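A numerical illustration of this counterexample (my addition, not part of the argument above): if $(X,Y)$ had a joint pdf, two-dimensional histogram density estimates would stabilise as the bins shrink; with $Y=X$ the estimate on the diagonal instead grows roughly like $1/h$, which is what the failure of absolute continuity looks like in simulation.

```python
import numpy as np

# Illustration: Y = X, so all mass of (X, Y) sits on the line y = x.
# A 2-D histogram "density" estimate near the diagonal then scales like 1/h
# as the bin width h shrinks, instead of converging to a finite density.
rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)
y = x.copy()                      # the degenerate choice Y = X

for h in (0.5, 0.1, 0.02):
    edges = np.arange(-4.0, 4.0 + h, h)
    hist, _, _ = np.histogram2d(x, y, bins=[edges, edges], density=True)
    i = np.searchsorted(edges, 0.0) - 1   # diagonal bin nearest the origin
    print(f"h = {h:5.2f}   estimated density near (0, 0) ~ {hist[i, i]:8.2f}")
```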
Firstly, to find $f_{XY}(x,y)$: your question reads as though we are in a particular scenario, namely that $X$ and $Y$ are independent. If that is the case, then
$$f_{XY}(x,y) = f_X(x)f_Y(y),$$
so to find the joint density we simply multiply the marginal densities.
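A tiny sanity check of this product rule (my addition; independent standard normals are assumed only for illustration): the product of the marginals agrees with the joint density of a bivariate normal with identity covariance.

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

# Illustrative assumption: X, Y independent standard normals.
# Then f_{XY}(x, y) = f_X(x) * f_Y(y), i.e. the N(0, I_2) density.
x0, y0 = 0.5, -0.3
product = norm.pdf(x0) * norm.pdf(y0)
joint = multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2)).pdf([x0, y0])
print(product, joint)   # identical up to floating-point error
```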
Secondly, you ask how to find $f_{XZ}(x,z)$. Here $X$ and $Z$ are not independent (since $Z = X + Y$). Reading the "probabilities" below loosely as density values (the change of variables $(x,y)\mapsto(x,z)$ has Jacobian $1$), we find
\begin{align}
f_{XZ}(x,z) &= \Pr(X=x \text{ and } Z=z) \\
&= \Pr(X=x \text{ and } X+Y=z) \\
&= \Pr(X=x \text{ and } Y=z-x) \\
&= f_{XY}(x,z-x) \\
&= f_X(x)\,f_Y(z-x),
\end{align}
with the last step following from independence.
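As a numerical check of this formula (my addition, again assuming independent standard normals purely for illustration), one can simulate $(X,Z)=(X,X+Y)$ and compare the empirical joint density near a point with $f_X(x)\,f_Y(z-x)$:

```python
import numpy as np
from scipy.stats import norm

# Illustrative assumption: X, Y independent standard normals, Z = X + Y.
# Claim to check: f_{XZ}(x, z) = f_X(x) * f_Y(z - x).
rng = np.random.default_rng(3)
x = rng.standard_normal(2_000_000)
y = rng.standard_normal(2_000_000)
z = x + y

h = 0.1                                        # side length of a small cell
x0, z0 = 0.4, 1.0                              # point at which to compare
in_cell = (np.abs(x - x0) < h / 2) & (np.abs(z - z0) < h / 2)
empirical = in_cell.mean() / h**2              # P((X,Z) in cell) / cell area
claimed = norm.pdf(x0) * norm.pdf(z0 - x0)     # f_X(x0) * f_Y(z0 - x0)
print(empirical, claimed)                      # should agree to about two decimals
```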
Of course, this is not very general and only works in this case (since $Z=X+Y$). When the relationship is different, as long as we know the conditional distribution, we can use Bayes' theorem extended to pdfs:
$$f_{XZ}(x,z) = f_{Z|X}(z|x)f_X(x).$$
Of course, we must know this conditional distribution.
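For the specific relationship in the question, $Z=X+Y$ with $X$ and $Y$ independent, the conditional density appearing above can be made explicit (this follows from the derivation earlier in this answer; it is my addition rather than part of the original text): given $X=x$ we have $Z=x+Y$, so
$$f_{Z|X}(z|x) = \frac{f_{XZ}(x,z)}{f_X(x)} = \frac{f_X(x)\,f_Y(z-x)}{f_X(x)} = f_Y(z-x).$$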
Even more generally, if we only know the two marginal distributions of dependent variables and allow an arbitrary relationship between them, there will often be infinitely many compatible joint distributions, so the problem becomes much more complex; see Wikipedia.
edited Dec 25 '18 at 21:39, answered Dec 24 '18 at 23:35 – ptolemy0
Then how do we find $f_{Z|X}(z|x)$? – Dave, Dec 25 '18 at 2:12
In the discrete case, does a pdf exist? – Danny Pak-Keung Chan, Dec 25 '18 at 18:43
Sorry @Dave, I see what your question is really asking. In general, if $X$ and $Y$ are dependent, there are infinitely many options for $f_{XY}(x,y)$, so we cannot simply calculate it. See this question. – ptolemy0, Dec 25 '18 at 19:42
The above only works if we know the joint distribution explicitly. If we did want to find it, as in your case, Wikipedia gives an answer, but this is honestly beyond my knowledge, so I can't help any further. Sorry for an unsatisfying answer; I've added this to my answer. – ptolemy0, Dec 25 '18 at 19:43
Actually, I reconsidered it and amended my answer, which I think should clear things up. I hope that is better. – ptolemy0, Dec 25 '18 at 21:40