How to analytically compute KL divergence of two Gaussian distributions?
Consider two multivariate Gaussian distributions, $p(x)=\mathcal N(x;\mu_p, \sigma_p^2)$ and $q(x)=\mathcal N(x; \mu_q, \sigma_q^2)$. According to the paper "Auto-Encoding Variational Bayes", the KL divergence $D_{KL}(p(x)\Vert q(x))$ of these two Gaussians can be calculated analytically, but I don't know how. The paper only gives the result for the special case $q(x)=\mathcal N(x;\mathbf 0, \mathbf I)$, with no detailed derivation.



P.S. I know how the univariate case is derived:



$$
\begin{align}
D_{KL}(p(x)\Vert q(x))&=\int p(x)\log {p(x)\over q(x)}\,dx\\
&=\int p(x)\log {{1\over \sqrt{2\pi \sigma_p^2}}\exp\left(-{(x-\mu_p)^2\over 2\sigma_p^2}\right)\over{1\over \sqrt{2\pi \sigma_q^2}}\exp\left(-{(x-\mu_q)^2\over 2\sigma_q^2}\right)}\,dx\\
&={1\over 2}\int\log{\sigma_q^2\over \sigma_p^2}\,p(x)\,dx - {1\over 2\sigma_p^2}\int(x-\mu_p)^2p(x)\,dx+{1\over2\sigma_q^2}\int(x-\mu_q)^2p(x)\,dx\\
&={1\over 2}\left(\log{\sigma_q^2\over \sigma_p^2} - 1 + {1\over\sigma_q^2}\int(x-\mu_p + \mu_p - \mu_q)^2p(x)\,dx\right)\\
&={1\over 2}\left(\log{\sigma_q^2\over \sigma_p^2} - 1 + {1\over\sigma_q^2}\int\left((x-\mu_p)^2 + 2(x-\mu_p)(\mu_p-\mu_q) + (\mu_p - \mu_q)^2\right)p(x)\,dx\right)\\
&={1\over 2}\left(\log{\sigma_q^2\over \sigma_p^2} - 1 + {1\over\sigma_q^2}\left(\sigma_p^2 + (\mu_p - \mu_q)^2\right)\right)
\end{align}
$$
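As a concrete check, the closed form on the last line can be compared against a direct numerical integration of $\int p(x)\log{p(x)\over q(x)}\,dx$. The parameter values and helper names below are arbitrary, purely for illustration:

```python
import numpy as np

def kl_closed(mu_p, sig_p, mu_q, sig_q):
    # Closed form from the last line of the derivation above.
    return 0.5 * (np.log(sig_q**2 / sig_p**2) - 1
                  + (sig_p**2 + (mu_p - mu_q)**2) / sig_q**2)

def kl_numeric(mu_p, sig_p, mu_q, sig_q, num=200_001):
    # Trapezoidal approximation of  ∫ p(x) log(p(x)/q(x)) dx
    # over a grid wide enough that the truncated tails are negligible.
    x = np.linspace(mu_p - 12 * sig_p, mu_p + 12 * sig_p, num)
    log_p = -0.5 * np.log(2 * np.pi * sig_p**2) - (x - mu_p)**2 / (2 * sig_p**2)
    log_q = -0.5 * np.log(2 * np.pi * sig_q**2) - (x - mu_q)**2 / (2 * sig_q**2)
    f = np.exp(log_p) * (log_p - log_q)
    dx = x[1] - x[0]
    return dx * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

print(kl_closed(1.0, 0.5, -0.3, 2.0))   # ~1.1288
print(kl_numeric(1.0, 0.5, -0.3, 2.0))  # agrees to several decimal places
```

The two numbers agree, which supports the derivation above.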



but I'm not very familiar with the multivariate case. It seems the result involves a sum over the coordinates:



$$
D_{KL}(p(x)\Vert q(x))={1\over 2}\sum_{j=1}^{J}\left(\log{\sigma_{q,j}^2\over \sigma_{p,j}^2} - 1 + {1\over\sigma_{q,j}^2}\left(\sigma_{p,j}^2 + (\mu_{p,j} - \mu_{q,j})^2\right)\right)
$$



where $J$ is the dimension of $x$. But why do we sum over all the coordinates?
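Presumably the sum appears because a diagonal-covariance Gaussian factorizes into a product of independent univariate Gaussians, $p(x)=\prod_j p_j(x_j)$, so the log ratio $\log{p(x)\over q(x)}$ splits into a sum and the joint KL divergence becomes the sum of the per-coordinate divergences. A quick Monte Carlo sanity check of that identity (arbitrary, illustrative parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
J = 3  # arbitrary dimension
mu_p, sig_p = rng.normal(size=J), rng.uniform(0.5, 2.0, size=J)
mu_q, sig_q = rng.normal(size=J), rng.uniform(0.5, 2.0, size=J)

# Sum of per-coordinate univariate closed forms (the formula above).
kl_sum = 0.5 * np.sum(np.log(sig_q**2 / sig_p**2) - 1
                      + (sig_p**2 + (mu_p - mu_q)**2) / sig_q**2)

# Monte Carlo estimate of E_p[log p(x) - log q(x)] for the joint densities.
x = rng.normal(mu_p, sig_p, size=(1_000_000, J))
log_p = np.sum(-0.5 * np.log(2*np.pi*sig_p**2) - (x - mu_p)**2 / (2*sig_p**2), axis=1)
log_q = np.sum(-0.5 * np.log(2*np.pi*sig_q**2) - (x - mu_q)**2 / (2*sig_q**2), axis=1)
kl_mc = np.mean(log_p - log_q)

print(kl_sum, kl_mc)  # the two estimates agree up to Monte Carlo noise
```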



Please help me sort this out. Thanks in advance!
• stats.stackexchange.com/questions/60680/… – Stelios, Aug 21 '18 at 21:36
Tags: calculus, probability, integration, entropy
asked Aug 20 '18 at 3:37, edited Aug 21 '18 at 21:54 – Sherwin Chen
1 Answer
According to http://stanford.edu/~jduchi/projects/general_notes.pdf, the KL divergence of two multivariate Gaussians on $\mathbb R^n$ can be computed as follows:



$$
\begin{align}
D_{KL}(P_1\Vert P_2) &= {1\over 2}E_{P_1}\left[-\log\det\Sigma_1-(x-\mu_1)^T\Sigma_1^{-1}(x-\mu_1)+\log\det\Sigma_2+(x-\mu_2)^T\Sigma_2^{-1}(x-\mu_2)\right]\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}+E_{P_1}\left[-\mathrm{tr}\big((x-\mu_1)^T\Sigma_1^{-1}(x-\mu_1)\big)+\mathrm{tr}\big((x-\mu_2)^T\Sigma_2^{-1}(x-\mu_2)\big)\right]\right)\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}+E_{P_1}\left[-\mathrm{tr}\big(\Sigma_1^{-1}(x-\mu_1)(x-\mu_1)^T\big)+\mathrm{tr}\big(\Sigma_2^{-1}(x-\mu_2)(x-\mu_2)^T\big)\right]\right)\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}-n+E_{P_1}\left[\mathrm{tr}\Big(\Sigma_2^{-1}\big(xx^T-2x\mu_2^{T}+\mu_2\mu_2^T\big)\Big)\right]\right)\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}-n+\mathrm{tr}\Big(\Sigma_2^{-1}\big(\Sigma_1+\mu_1\mu_1^T-2\mu_1\mu_2^{T}+\mu_2\mu_2^T\big)\Big)\right)\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}-n+\mathrm{tr}(\Sigma_2^{-1}\Sigma_1)+\mathrm{tr}\big(\mu_1^T\Sigma_2^{-1}\mu_1-2\mu_1^T\Sigma_2^{-1}\mu_2+\mu_2^T\Sigma_2^{-1}\mu_2\big)\right)\\
&={1\over 2}\left(\log{\det\Sigma_2\over \det\Sigma_1}-n+\mathrm{tr}(\Sigma_2^{-1}\Sigma_1)+(\mu_1-\mu_2)^T\Sigma_2^{-1}(\mu_1-\mu_2)\right)
\end{align}
$$

where the second step uses the fact that $a=\mathrm{tr}(a)$ for any scalar $a$, and the cyclic property of the trace, $\mathrm{tr}\left(\prod_{i=1}^nF_{i}\right)=\mathrm{tr}\left(F_n\prod_{i=1}^{n-1}F_i\right)$, is applied wherever necessary. The fourth step uses $E_{P_1}\left[\mathrm{tr}\big(\Sigma_1^{-1}(x-\mu_1)(x-\mu_1)^T\big)\right]=\mathrm{tr}(\Sigma_1^{-1}\Sigma_1)=n$, and the fifth uses $E_{P_1}[xx^T]=\Sigma_1+\mu_1\mu_1^T$.



The last line reduces to the equation in the question when the $\Sigma$'s are diagonal matrices.
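As a sanity check, the general formula can be implemented directly and compared against the diagonal-covariance sum from the question (arbitrary illustrative parameters; the function name is my own):

```python
import numpy as np

def kl_mvn(mu1, S1, mu2, S2):
    # 0.5 * ( log(det S2 / det S1) - n + tr(S2^{-1} S1)
    #         + (mu1 - mu2)^T S2^{-1} (mu1 - mu2) )
    n = mu1.shape[0]
    S2_inv = np.linalg.inv(S2)
    d = mu1 - mu2
    _, logdet1 = np.linalg.slogdet(S1)
    _, logdet2 = np.linalg.slogdet(S2)
    return 0.5 * (logdet2 - logdet1 - n + np.trace(S2_inv @ S1) + d @ S2_inv @ d)

# With diagonal covariances the general formula reduces to the
# per-coordinate sum from the question.
mu1, s1 = np.array([0.0, 1.0, -2.0]), np.array([1.0, 0.5, 1.5])
mu2, s2 = np.array([0.5, -1.0, 0.0]), np.array([2.0, 1.5, 0.8])
general = kl_mvn(mu1, np.diag(s1**2), mu2, np.diag(s2**2))
diagonal = 0.5 * np.sum(np.log(s2**2 / s1**2) - 1
                        + (s1**2 + (mu1 - mu2)**2) / s2**2)
print(general, diagonal)  # identical up to floating-point error
```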
answered Dec 31 '18 at 0:19, edited Dec 31 '18 at 0:27 – Sherwin Chen