Kernels and reduced row echelon form














In response to this question Kernels and reduced row echelon form - explanation, a very clear method is given for reading a basis for the kernel of a matrix directly from its reduced row echelon form. However, there's one step whose reasoning I don't quite understand, and I was wondering if anyone could shed some light on the matter.



The user AMD says: "...For each of these vectors, set one of the components that correspond to these pivotless columns to 1 and the rest to 0." I understand what this means, and once this step is taken the reasoning for everything else is completely clear. I just don't understand why we know that this is the right step to take, despite having considered the matter at length.



Thanks in advance!










      linear-algebra matrices






      asked Dec 13 '18 at 21:53









George Baos

1 Answer






Computing the RREF $B=SA$ of the $m\times n$ coefficient matrix $A$ transforms the system of equations into an equivalent one in which $r=\operatorname{rank}(A)$ of the variables $x_i$ (those that correspond to pivot columns) each appear in exactly one equation, so their values are completely determined by the non-pivot variables that appear in the same equation. We therefore choose those non-pivot variables as the free variables of the system. Setting all of the free variables to zero forces the pivot variables to zero as well, so any non-trivial solution of this homogeneous system must have at least one non-zero value among the free variables. We need $n-r$ linearly independent solutions, and there happen to be exactly that many free variables available, so by setting each of these variables in turn to $1$ and holding the rest at $0$, we can produce $n-r$ elements of $\mathbb R^n$ that are guaranteed to be linearly independent. The equations that result from doing this are all of the form $x_i+b=0$, where $x_i$ is one of the pivot variables, which give us the values of the remaining components of the vector.
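
For concreteness, here is a minimal sketch of this recipe in Python (assuming SymPy is available; the function name kernel_basis_from_rref is just illustrative):

```python
# A sketch of the recipe above: one kernel basis vector per pivotless column.
from sympy import Matrix, zeros

def kernel_basis_from_rref(A: Matrix):
    R, pivots = A.rref()                   # R is the RREF of A, pivots = pivot column indices
    free_cols = [j for j in range(A.cols) if j not in pivots]
    basis = []
    for j in free_cols:
        v = zeros(A.cols, 1)
        v[j] = 1                           # set this free variable to 1, all other free variables to 0
        for i, p in enumerate(pivots):
            v[p] = -R[i, j]                # row i of the RREF reads x_p + R[i, j]*x_j + ... = 0
        basis.append(v)
    return basis
```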



Taking the second example from the answer to the linked question, the reduced system is $$\begin{align} x_1+2x_3-3x_4 &= 0 \\ x_2-x_3+2x_4 &= 0.\end{align}$$ The first and second columns have pivots and, sure enough, $x_1$ and $x_2$ each appear in exactly one equation. If we try a solution with $x_3=1$ and $x_4=0$, the equations become $x_1+2=0$ and $x_2-1=0$, and for a solution with $x_3=0$ and $x_4=1$, the equations are $x_1-3=0$ and $x_2+2=0$.
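
As a quick sanity check, the two vectors read off this way can be verified directly (again a sketch assuming SymPy):

```python
# Check that the two vectors obtained above really solve the reduced system,
# i.e. lie in the kernel.
from sympy import Matrix

A = Matrix([[1, 0, 2, -3],
            [0, 1, -1, 2]])      # the RREF encoding the reduced system above
v1 = Matrix([-2, 1, 1, 0])       # x_3 = 1, x_4 = 0 forces x_1 = -2, x_2 = 1
v2 = Matrix([3, -2, 0, 1])       # x_3 = 0, x_4 = 1 forces x_1 = 3, x_2 = -2
print((A * v1).T, (A * v2).T)    # both products are the zero vector
```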



Another way to look at this is in terms of completing a basis for $\mathbb R^n$. The nonzero rows of the RREF are a basis for $A$’s row space. Each of these vectors has a $1$ in a unique position that corresponds to a pivot column and zeros in all of the other pivot positions. If we extend this set to a complete basis of $\mathbb R^n$, the coefficients of these row space basis vectors in the expansion of an arbitrary vector $\mathbf v$ in this basis are completely determined. This likely doesn’t produce the required values for the other components of $\mathbf v$, though; an easy way to adjust those values is to complete the basis with the standard basis vectors $\mathbf e_i$ that correspond to non-pivot positions. That is, set one of the non-pivot positions to $1$ and the rest to $0$. We also want all of these additional basis vectors to be orthogonal to all of the row space basis vectors, which we can arrange by filling in the pivot positions of each additional basis vector with suitable values. This doesn’t affect the linear independence of the set, so the adjusted extra vectors are a basis for the orthogonal complement of the row space, i.e., of the null space.
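
A short check of this second viewpoint on the example above (again just a sketch assuming SymPy): each nonzero RREF row is orthogonal to each kernel vector, and together the four vectors span $\mathbb R^4$.

```python
# Row space basis (nonzero RREF rows) vs. kernel basis from the example above:
# the two sets are mutually orthogonal and together form a basis of R^4.
from sympy import Matrix

rows = [Matrix([[1, 0, 2, -3]]), Matrix([[0, 1, -1, 2]])]   # row space basis
kernel = [Matrix([-2, 1, 1, 0]), Matrix([3, -2, 0, 1])]     # kernel basis

print(all((r * v)[0] == 0 for r in rows for v in kernel))   # True: every cross pair is orthogonal
combined = Matrix.vstack(*rows, *[v.T for v in kernel])
print(combined.rank())                                      # 4, so together they span R^4
```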






                edited Dec 13 '18 at 23:21

























                answered Dec 13 '18 at 23:04









amd






























