Machine learning Octave code gradient descent question


I'm taking the Coursera Machine Learning course, so anyone who has taken this course should be able to help with this problem.



This is the Octave code for the gradient descent update step:



     theta = theta - alpha / m * ((X * theta - y)' * X)';   % this is the answer key provided


First question:
the way I understand gradient descent, theta(0) and theta(1) should each be updated with its own expression, as follows:



     theta(0) = theta(0) - alpha / m * ((X * theta(0) - y)')';   % my answer key
     theta(1) = theta(1) - alpha / m * ((X * theta(1) - y)')';   % my answer key


but I'm not sure why the answer key only shows this single equation:

     theta = theta - alpha / m * ((X * theta - y)' * X)';



Second question: what is the ' doing in the Octave code?



     theta = theta - alpha / m * ((X * theta - y)' * X)';
     % what do the ' in  ...)' * X)'  do here?
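
For reference, assuming the standard linear-regression setup from the course, the per-parameter update being implemented is $\theta_j := \theta_j - \frac{\alpha}{m}\sum_{i=1}^{m}\bigl(h_\theta(x^{(i)}) - y^{(i)}\bigr)x_j^{(i)}$, and the answer-key line is its vectorized form $\theta := \theta - \frac{\alpha}{m}X^\top(X\theta - y)$, since $\bigl((X\theta - y)^\top X\bigr)^\top = X^\top(X\theta - y)$.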









machine-learning octave






asked May 25 '16 at 10:34 by james Miler
  • In Octave/MATLAB, $x = u + v*w$ can have $x$, $u$, $w$ as three vectors and $v$ as a matrix, with $v*w$ the multiplication of a matrix by a vector. The main idea of MATLAB is that the basic data types, instead of being integers and floating-point numbers, are arrays/matrices of numbers.
    – reuns, May 25 '16 at 10:40

  • In Octave, $X'$ corresponds to the transpose of the matrix (or the vector) $X$.
    – zuggg, May 25 '16 at 11:36

  • Oh OK, so X' means the transpose of X. Is there someone who knows gradient descent? I do not understand why they used the transpose to find theta here.
    – james Miler, May 26 '16 at 0:28
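
To see what the ' operator does in practice, here is a minimal Octave sketch (the values are made up purely for illustration):

     v = [1; 2; 3];        % 3x1 column vector
     v'                    % ans = 1 2 3            (1x3 row vector: the transpose)
     A = [1 2 3; 4 5 6];   % 2x3 matrix
     A'                    % ans = [1 4; 2 5; 3 6]  (3x2 matrix: the transpose of A)
     % For real matrices, ' is the plain transpose; for complex ones it also conjugates.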


















1 Answer

The transpose here is used to match the columns of X with the rows of theta. Example sizes:
X = 97x2, y = 97x1, theta = 2x1.

The first calculation is X * theta; the resulting matrix has size 97x1. Then comes the subtraction of two matrices of the same size, which is again 97x1. Next, we have to multiply this result by X, but the sizes do not match: (97x1) * (97x2). Transposing the first matrix makes the multiplication possible, (1x97) * (97x2), which results in a new matrix of size 1x2 (a row vector). But theta is of size 2x1 (a column vector), hence the final transpose.
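
To make the dimension argument concrete, here is a minimal Octave sketch (the sizes match the example above, but the data are random and made up purely for illustration) comparing the vectorized answer-key update with an explicit per-parameter loop; both produce the same new theta, which is why a single vectorized line replaces separate updates for each component:

     m = 97;
     X = [ones(m, 1), rand(m, 1)];   % 97x2 design matrix (bias column + one feature)
     y = rand(m, 1);                 % 97x1 targets
     theta = zeros(2, 1);            % 2x1 parameter vector
     alpha = 0.01;                   % learning rate

     % Vectorized update: (X*theta - y) is 97x1, its transpose is 1x97,
     % (1x97)*(97x2) gives 1x2, and the final ' turns that row back into a 2x1 column.
     theta_vec = theta - alpha / m * ((X * theta - y)' * X)';

     % Equivalent per-parameter update (every component uses the *old* theta):
     grad = zeros(2, 1);
     for j = 1:2
         grad(j) = (1 / m) * sum((X * theta - y) .* X(:, j));
     end
     theta_loop = theta - alpha * grad;

     disp(max(abs(theta_vec - theta_loop)));   % prints ~0 (floating-point noise only)

Note that updating the components one at a time while overwriting theta as you go would change the result, since gradient descent requires a simultaneous update; the vectorized form gives the simultaneous update automatically.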






answered Mar 3 '17 at 20:28 by Bharani K Dharan





























