What is the difference between a kernel and a kernel (Gram) matrix?












Given a kernel, can we represent it as a Gram matrix? For example, a linear kernel can be represented (in Python/MATLAB code) as the Gram matrix K = X*X.T. If so, how can we represent other, non-trivial kernels in Gram matrix form? For example, see the following link, page 5, equation 17, for the Jensen-Shannon kernel: K(p,q) = exp(-JS(p||q))



https://pdfs.semanticscholar.org/3e43/4ca7cbd1869f41e338658f7ab4f954782ad8.pdf
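For concreteness, here is one way such a Gram matrix might be assembled in Python for the Jensen-Shannon kernel above (a minimal sketch, not from the paper: the helper names js_divergence and js_kernel_gram are mine, and I use the natural logarithm, whereas the paper's equation 17 may use base 2):

    import numpy as np

    def js_divergence(p, q, eps=1e-12):
        # Jensen-Shannon divergence JS(p||q) = 0.5*KL(p||m) + 0.5*KL(q||m), with m = (p+q)/2.
        # Natural log is used here; adjust if the paper's equation 17 uses base 2.
        p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
        m = 0.5 * (p + q)
        kl_pm = np.sum(p * np.log((p + eps) / (m + eps)))
        kl_qm = np.sum(q * np.log((q + eps) / (m + eps)))
        return 0.5 * (kl_pm + kl_qm)

    def js_kernel_gram(P):
        # Gram matrix G with G[i, j] = exp(-JS(p_i || p_j)) for the rows p_i of P.
        m = P.shape[0]
        G = np.empty((m, m))
        for i in range(m):
            for j in range(m):
                G[i, j] = np.exp(-js_divergence(P[i], P[j]))
        return G

    # Example: three discrete distributions over four outcomes.
    P = np.array([[0.25, 0.25, 0.25, 0.25],
                  [0.70, 0.10, 0.10, 0.10],
                  [0.40, 0.30, 0.20, 0.10]])
    print(js_kernel_gram(P))   # symmetric, with ones on the diagonal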






















matrices machine-learning positive-definite positive-semidefinite python






asked Sep 9 '17 at 13:24 by Hello World, edited Sep 9 '17 at 16:47












  • Can you briefly define (or point me to a definition of) "kernel", as you use the term? The paper you cite is pretty diffuse: full of examples, but short on precision. – kimchi lover, Sep 9 '17 at 13:31










  • You can look at this for the relation between the kernel trick, a Gram matrix (the dot-product matrix for a given dataset), and the inner product in a high-dimensional vector space. @kimchilover – reuns, Sep 9 '17 at 16:53












  • This does not define the kernel. – kimchi lover, Sep 9 '17 at 16:55










  • @kimchilover In my linked post I explain why (in machine learning) a kernel is any function $k : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ such that for any $x_i \in \mathbb{R}^n$, $i = 1, \ldots, m$, the matrix $K_{ij} = k(x_i, x_j)$ is positive semi-definite. – reuns, Sep 9 '17 at 16:57












  • Guys, I just want a Python/MATLAB Gram matrix expression for the kernel above. – Hello World, Sep 9 '17 at 16:58


















1 Answer


















It seems you are talking about kernels in the context of machine learning, in which case the difference between a kernel and the kernel (Gram) matrix can be understood via the following statement of Mercer's theorem:



Mercer's theorem in the context of machine learning



Let $X = \{x^{(1)}, \ldots, x^{(m)}\}$ be a data set of $m$ points, each of which is an $n$-dimensional vector, i.e. $x^{(i)} \in \mathbb{R}^n$. Then the function
$$ K : \mathbb{R}^n \times \mathbb{R}^n \rightarrow \mathbb{R}$$

is a valid kernel if and only if the matrix $G$, called the kernel matrix or Gram matrix, is symmetric and positive semi-definite.



The matrix $G$ is an $m \times m$ matrix whose entries are the kernel evaluated at the corresponding pairs of data points:



$$G_{i,j} = K(x^{(i)}, x^{(j)})$$
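Numerically, Mercer's condition can be checked for a finite data set by building $G$ entry by entry from a candidate kernel and inspecting its eigenvalues. A rough Python sketch (my own, not part of the original answer; the helper gram_matrix and the round-off tolerance are assumptions):

    import numpy as np

    def gram_matrix(kernel, X):
        # Build the m x m Gram matrix with G[i, j] = kernel(x_i, x_j).
        m = len(X)
        G = np.empty((m, m))
        for i in range(m):
            for j in range(m):
                G[i, j] = kernel(X[i], X[j])
        return G

    rng = np.random.default_rng(0)
    X = rng.normal(size=(6, 3))                             # 6 data points in R^3
    G = gram_matrix(lambda x, z: float(np.dot(x, z)), X)    # linear kernel

    print(np.allclose(G, G.T))                              # symmetric
    print(np.linalg.eigvalsh(G).min() >= -1e-10)            # PSD, up to round-off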



Moreover, note that




A function $K(x,z)$ is a valid kernel if it corresponds to an inner product in some (perhaps infinite dimensional) feature space.




Hence:



For the linear kernel, the Gram matrix consists of the plain inner products $G_{i,j} = x^{(i)T} x^{(j)}$. For other kernels, it consists of inner products in a feature space with feature map $\phi$, i.e. $G_{i,j} = \phi(x^{(i)})^T \phi(x^{(j)})$.
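To illustrate the feature-map view with a concrete case (my own example, not from the sources below): the quadratic kernel $K(x,z) = (x^T z)^2$ on $\mathbb{R}^2$ has the explicit map $\phi(x) = (x_1^2, \sqrt{2}\, x_1 x_2, x_2^2)$, and both routes give the same Gram matrix:

    import numpy as np

    def phi(x):
        # Explicit feature map for the quadratic kernel K(x, z) = (x.z)^2 on R^2.
        return np.array([x[0]**2, np.sqrt(2) * x[0] * x[1], x[1]**2])

    rng = np.random.default_rng(1)
    X = rng.normal(size=(4, 2))               # 4 data points in R^2

    G_kernel = (X @ X.T) ** 2                 # Gram matrix via the kernel function
    Phi = np.array([phi(x) for x in X])
    G_feature = Phi @ Phi.T                   # Gram matrix via feature-space inner products

    print(np.allclose(G_kernel, G_feature))   # True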



Sources



Page 18: https://see.stanford.edu/materials/aimlcs229/cs229-notes3.pdf

Page 45: http://svivek.com/teaching/machine-learning/fall2017/slides/svm/kernels.pdf

Page 52: https://people.eecs.berkeley.edu/~jordan/kernels/0521813972c03_p47-84.pdf






answered Jul 2 '18 at 10:39 by Xavier Bourret Sicotte, edited Jul 2 '18 at 13:19





























