Is $\det(A)=0$ a good indicator to say that a matrix is not invertible?












1














In finite element methods, for example, huge sparse matrices (stored in CRS format, i.e., matrices with a lot of zeros) appear. Is it possible that MATLAB (or some other program) computes $\det(A)=0$ even though the matrix is invertible?

























matlab numerical-linear-algebra sparse-matrices
















asked Dec 5 '18 at 3:06









yemino








  • If the program doesn't use exact arithmetic but rounds off, then yes, it is possible.
    – coffeemath, Dec 5 '18 at 3:10
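For a concrete illustration of the comment, here is a minimal MATLAB sketch; the built-in Poisson test matrix stands in for a real FE stiffness matrix, and the 0.05 scaling is chosen only to force underflow:

    % 2-D Poisson (finite-difference) matrix: sparse, symmetric positive
    % definite, hence invertible. Scaling by 0.05 keeps every eigenvalue < 1.
    A = 0.05 * gallery('poisson', 60);   % 3600 x 3600 sparse matrix

    det(A)       % product of 3600 eigenvalues, all < 1: underflows to 0
    condest(A)   % roughly 1.5e3, i.e., comfortably invertible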
























3 Answers
























      4






The determinant takes a long time to compute for large matrices. A better approach is to look at the smallest singular values of your matrix. If they are $0$ or close to machine precision, the matrix is either not invertible or so poorly conditioned that it probably isn't worth inverting. If this is the case, you can either form a low-rank approximation and get an approximate answer, or try to reformulate your problem.
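A minimal MATLAB sketch of this check, assuming a sparse square matrix (the Poisson test matrix is a stand-in for your own) and a release where svds accepts the 'smallest' option; the relative test against $\epsilon\,\sigma_{\max}$ anticipates the comments below:

    A = gallery('poisson', 50);       % example sparse matrix; use your own here

    smax = svds(A, 1, 'largest');     % largest singular value
    smin = svds(A, 1, 'smallest');    % smallest singular value

    if smin < eps * smax
        disp('numerically singular, or too ill-conditioned to invert');
    else
        fprintf('cond(A) ~ %g; inversion is meaningful\n', smax / smin);
    end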







      answered Dec 5 '18 at 3:12









whpowell96












      • The smallest singular value is not enough; it needs to be related to the largest one. Their ratio (the reciprocal of the spectral condition number) gives you the relative distance to the closest singular matrix (measured in the same norm).
        – Algebraic Pavel, Dec 5 '18 at 13:55

      • This is true, but in practice your largest singular value is typically much larger than machine precision, so singular values on the order of $10^{-16}$ will almost always lead to conditioning problems.
        – whpowell96, Dec 5 '18 at 16:31

      • I'm not sure you got my point. You need to consider the value of $\sigma_{\min}$ relative to $\sigma_{\max}$, not its absolute value. Consider $A=10^{-16}I$. The minimal singular value of $A$ is $10^{-16}$, but it's perfectly well conditioned.
        – Algebraic Pavel, Dec 5 '18 at 16:37

      • I am aware of the definition of the 2-condition number. I am saying that, in practice, your largest singular value will never be that low, because if it is, you are probably losing precision due to the problem being poorly scaled or something.
        – whpowell96, Dec 5 '18 at 16:41
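The $A=10^{-16}I$ example from the comments is easy to verify; a tiny MATLAB sketch:

    A = 1e-16 * eye(5);   % every singular value equals 1e-16

    svd(A)                % all at machine-precision scale...
    cond(A)               % ...yet cond(A) = 1: perfectly well conditioned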


























          2






          Computing the determinant of a matrix is quite sensitive to round-off. On top of that, it is easy to obtain a zero or infinite determinant as the output of a computational procedure due to floating-point underflow or overflow.

          Consider, e.g., $A_n=0.1\times I_n$, where $I_n$ is the $n\times n$ identity matrix. We have $\det(A_n)=10^{-n}$. If $n$ is large enough (324 for double precision), standard techniques to compute the determinant will report zero, although the matrix $A_n$ itself is perfectly conditioned and invertible.

          The conditioning of the matrix is a better measure of "(non)singularity" in numerical computations. It tells you how sensitive the matrix "inversion" is; this is the usual definition of the condition number. The higher the condition number, the more sensitive the solution of $Ax=b$ is to perturbations of the input and to round-off.

          On top of that, it tells you how far the matrix is from the nearest singular matrix. If $\kappa(A)$ is the condition number of a nonsingular $A$ in some suitable norm (usually one of the three popular $p$-norms), then there is a perturbation $\delta A$ with $\|\delta A\|/\|A\|=1/\kappa(A)$ such that $A+\delta A$ is singular. The higher the condition number, the closer we are to a singular matrix. Eventually, if $1/\kappa(A)\approx\epsilon$, where $\epsilon$ is the machine precision (e.g., $\approx 10^{-16}$ for double-precision floating-point arithmetic), the matrix is considered numerically singular.
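          The underflow in this example is easy to reproduce; a minimal MATLAB sketch (assuming det and condest accept sparse input, as in recent releases):

              n = 400;               % comfortably past the underflow threshold of 324
              A = 0.1 * speye(n);    % A_n = 0.1 * I_n, stored sparse

              det(A)                 % 10^(-400) underflows: reports 0
              condest(A)             % estimated condition number: 1, perfectly conditioned
              rank(full(A))          % n, i.e., full rank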
















          answered Dec 5 '18 at 17:10









          Algebraic Pavel































                  1






                  Absolutely.

                  There's always round-off error, and algorithms for calculating determinants can be numerically unstable.

                  There are numerical techniques to find inverses of sparse matrices; I don't know any offhand, but Google will.
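                  In practice one rarely forms the inverse explicitly; a sparse direct solve is the usual route. A minimal MATLAB sketch, with the matrix and right-hand side invented for illustration:

                      A = gallery('poisson', 50);    % example sparse SPD matrix; use your FE matrix
                      b = ones(size(A, 1), 1);       % example right-hand side

                      x = A \ b;                     % sparse direct solve; never forms inv(A)

                      fprintf('residual norm: %g\n', norm(A*x - b));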
















                  answered Dec 5 '18 at 3:40









                  user458276





























