If $A$ is invertible and $A^n$ is diagonalizable, then $A$ is diagonalizable.


























I'm attempting to understand a proof presented on this Wikipedia page: https://en.wikipedia.org/wiki/Diagonalizable_matrix



The claim is as follows:




Let $A$ be a matrix over $F$. If $A$ is diagonalizable, then so is any power of it. Conversely, if $A$ is invertible, $F$ is algebraically closed, and $A^n$ is diagonalizable for some $n$ that is not an integer multiple of the characteristic of $F$, then $A$ is diagonalizable.
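The forward direction is the easy half; for completeness, here is the one-line computation (my own remark, not part of the quoted claim): if $A = PDP^{-1}$ with $D$ diagonal, then

```latex
A^n = (PDP^{-1})^n = P D^n P^{-1},
```

and $D^n$ is again diagonal, so every power of $A$ is diagonalizable.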




This is the provided proof:



If $A^n$ is diagonalizable, then $A$ is annihilated by some polynomial $\left(x^{n}-\lambda_{1}\right)\cdots\left(x^{n}-\lambda_{k}\right)$, which has no multiple root (since $\lambda_{j}\neq 0$) and is divisible by the minimal polynomial of $A$.
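The quoted step can be checked on a concrete example (my own illustration, not from the article): take an invertible $A$ whose square is diagonalizable, build $f(x)=\prod_j(x^n-\lambda_j)$ from the distinct eigenvalues $\lambda_j$ of $A^n$, and verify that $f$ annihilates $A$.

```python
import sympy as sp

# A is an invertible rotation matrix; A**2 = -I is already diagonal,
# hence diagonalizable. (Example chosen by me for illustration.)
A = sp.Matrix([[0, -1], [1, 0]])
n = 2
An = A**n

# Distinct eigenvalues of A^n (here just [-1])
lambdas = list(An.eigenvals().keys())

# f(A) = prod_j (A^n - lambda_j * I)
fA = sp.eye(2)
for lam in lambdas:
    fA = fA * (An - lam * sp.eye(2))

print(fA)  # the zero matrix, so f annihilates A
```

Since $f(x)=x^2+1$ has the distinct roots $\pm i$ and the minimal polynomial of $A$ divides $f$, this matches the conclusion that $A$ is diagonalizable over $\mathbb{C}$.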





Here are my questions:



(a) Why is it clear that $A$ is annihilated by that polynomial? $A$ certainly is annihilated by $(x-\lambda_1)\cdots(x-\lambda_k)$ and $A^n$ is annihilated by $(x-\lambda_1^n)\cdots(x-\lambda_k^n)$ (as powers of matrices have powers of eigenvalues as their eigenvalues), but I don't see the connection to Wikipedia's claim.



(b) What tells us that there is no multiple root? It's clear to me that $\lambda_j\neq0$ ($A$ invertible implies $A^n$ invertible, which implies $A^n$ does not have $0$ as an eigenvalue), but why can't two of the $\lambda_j$'s be equal? What says that we have distinct eigenvalues?



(c) Any polynomial that annihilates a matrix certainly is a multiple of the minimal polynomial...but why does this tell us that $A$ is diagonalizable?



(d) Earlier in the article, the following is claimed: A matrix or linear map is diagonalizable over the field $F$ if and only if its minimal polynomial is a product of distinct linear factors over $F$. If questions (a) and (b) are resolved, I can see how this would imply (c), but why is this claim true?
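For what it's worth, here is a standard sketch of the "if" direction of that equivalence, using a partial-fraction decomposition (this is my own summary, not the article's argument). Suppose $m(x)=(x-\lambda_1)\cdots(x-\lambda_k)$ with distinct $\lambda_i$. Then:

```latex
\frac{1}{m(x)} = \sum_{i=1}^{k} \frac{c_i}{x-\lambda_i}
\;\Longrightarrow\;
1 = \sum_{i=1}^{k} c_i \prod_{j\neq i}(x-\lambda_j).
% Substituting A and setting P_i := c_i \prod_{j\neq i}(A-\lambda_j I) gives
I = P_1 + \cdots + P_k, \qquad
(A-\lambda_i I)\,P_i = c_i\, m(A) = 0,
```

so every vector is a sum of eigenvectors, i.e. $V=\ker(A-\lambda_1 I)\oplus\cdots\oplus\ker(A-\lambda_k I)$, and $A$ is diagonalizable.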





Here is another approach to this problem, but it seems more complicated than what is presented on Wikipedia: Positive power of an invertible matrix with complex entries is diagonalizable only if the matrix itself is diagonalizable.




































      linear-algebra proof-explanation diagonalization






      asked Dec 5 '18 at 21:17









      Atsina






















          2 Answers































          https://en.wikipedia.org/wiki/Minimal_polynomial_(linear_algebra)



          Quote: An endomorphism $\varphi$ of a finite dimensional vector space over a field $F$ is diagonalizable if and only if its minimal polynomial factors completely over $F$ into distinct linear factors. The fact that there is only one factor $X - \lambda$ for every eigenvalue $\lambda$ means that the generalized eigenspace for $\lambda$ is the same as the eigenspace for $\lambda$: every Jordan block has size 1.



          Some quick justifications here: https://www.maths.ed.ac.uk/~tl/minimal.pdf





          So since we supposed $A^n$ diagonalisable, it has a minimal polynomial $\mu(x)=\prod(x-\lambda_i)$ with distinct $\lambda_i$ that satisfies $\mu(A^n)=0$.



          But $\mu(A^n)=0$ also means that $M(A)=0$, where $M(x)=\prod(x^n-\lambda_i)$, or equivalently $M(x)=\mu(x^n)$.
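The substitution $M(x)=\mu(x^n)$ can be sanity-checked symbolically (a small sketch; the symbol names are mine, purely illustrative):

```python
import sympy as sp

# mu is the minimal polynomial of A**n with two distinct symbolic roots;
# M(x) = mu(x**n) should expand to the annihilating polynomial of A.
x, l1, l2 = sp.symbols('x l1 l2')
n = 3
mu = (x - l1) * (x - l2)
M = mu.subs(x, x**n)

# M(x) == (x**n - l1)*(x**n - l2) identically
assert sp.expand(M - (x**n - l1) * (x**n - l2)) == 0
```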





          Now if the $\lambda_i=r_ie^{i\theta_i}$ are distinct, then so are their $n$-th roots.



          Indeed $(\lambda_1)^{1/n}=(\lambda_2)^{1/n}\implies\begin{cases} r_1=r_2\\ \frac{\theta_1+2n_1\pi}{n}=\frac{\theta_2+2n_2\pi}{n}+2m\pi\end{cases}\implies\begin{cases} r_1=r_2\\ \theta_1=\theta_2\pmod{2\pi}\end{cases}\implies \lambda_1=\lambda_2$



          (We get the result by contraposition).



          Considering now the minimal polynomial of $A$: it has to divide $M(x)$, and we have just seen that, because of the hypothesis made on $F$, $M$ splits completely and all its linear factors are distinct.



          Coming back to the quote at the beginning, this means that $A$ itself must be diagonalisable.






























          • It seems as though your proof that distinct eigenvalues have distinct roots is the converse of what we need. We know that the $\lambda_i$'s are distinct eigenvalues of $A^k$, and we need to show that the $(\lambda_i)^{1/k}$ are distinct eigenvalues of $A$. Also, you changed notation from the original question ($n$ to $k$), but that's not a big deal.
            – Atsina
            Dec 5 '18 at 22:56












          • Nevermind, I see you proved the contrapositive.
            – Atsina
            Dec 5 '18 at 23:00










          • Yep, I edited.
            – zwim
            Dec 6 '18 at 0:07










          • I've accepted your answer, but now I'm wondering where the assumption that $A$ is invertible comes into play.
            – Atsina
            Dec 6 '18 at 0:41






          • The invertibility is just there to avoid $\lambda_i=0$, since then a divisor of $x^n$ is not necessarily $x$ but can be any power $x^k$ with $k=1,\dots,n$. @Guido, yes, I detailed it only because the OP seemed confused about it.
            – zwim
            Dec 6 '18 at 7:37
































          That passage was written by a user with the Japanese user name "TakuyaMurata". I suppose what he meant was actually this: suppose $F$ is an algebraically closed field and $n>0$ is not an integer multiple of $\operatorname{char}(F)$. If $A$ is invertible and $A^n$ is diagonalisable over $F$, then:





          1. $f(x)=(x^n-\lambda_1)\cdots(x^n-\lambda_k)$ annihilates $A$, where $\lambda_1,\ldots,\lambda_k$ are the distinct eigenvalues of $A^n$.

          2. Since the $\lambda_i$'s are distinct, $x^n-\lambda_i$ and $x^n-\lambda_j$ have no common root when $i\ne j$. As $g(x):=x^n-\lambda_i$ and $g'(x)=nx^{n-1}$ also have no common roots (because $\lambda_i\ne0$ and $n\ne0$ --- we have used the assumptions that $A$ is invertible and $n$ is not an integer multiple of $\operatorname{char} F$ here), we see that $f$ has no multiple roots.

          3. As the minimal polynomial $m(x)$ of $A$ divides $f(x)$, $m$ also has no multiple roots.

          4. Therefore, $m$ is a product of distinct linear factors (note that $m$ splits because $F$ is by assumption algebraically closed). Hence $A$ is diagonalisable over $F$.
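Step 2's separability claim can be checked symbolically in characteristic $0$ (a sketch with my own symbol names): $g(x)=x^n-\lambda$ and its derivative share no common factor when $\lambda\ne0$, so $g$ is squarefree.

```python
import sympy as sp

# Over characteristic 0, g(x) = x**n - lam and g'(x) = n*x**(n-1) can only
# share the root 0, which is excluded because lam != 0. A trivial gcd
# therefore certifies that g has no multiple roots.
x, lam = sp.symbols('x lam')
n = 5
g = x**n - lam
assert sp.gcd(g, sp.diff(g, x)) == 1
```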




























            answered Dec 5 '18 at 22:31 by zwim (edited Dec 6 '18 at 0:06)












            • $begingroup$
              It seems as though your proof that distinct eigenvalues have distinct roots is the converse of what we need. We know that the $lambda_i$'s are distinct eigenvalues of $A^k$, and we need to show that $(lambda_i)^{1/k}$ are distinct eigenvalues of $A$. Also, you changed notation from the original question ($n$ to $k$), but that's not a big deal.
              $endgroup$
              – Atsina
              Dec 5 '18 at 22:56












            • $begingroup$
              Nevermind, I see you proved the contrapositive.
              $endgroup$
              – Atsina
              Dec 5 '18 at 23:00










            • $begingroup$
              yep, I edited. ;
              $endgroup$
              – zwim
              Dec 6 '18 at 0:07










            • $begingroup$
              I've accepted your answer, but now I'm wondering where the assumption that $A$ is invertible comes into play
              $endgroup$
              – Atsina
              Dec 6 '18 at 0:41






            • 2




              $begingroup$
              The invertibility is just to avoid these $lambda_i=0$, since then a divisor of $x^n$ is not necessarily $x$ but any power $x^k$ with $k=1..n$. @Guido, yes I detailled only because OP seemed confused about it.
              $endgroup$
              – zwim
              Dec 6 '18 at 7:37


















            • $begingroup$
              It seems as though your proof that distinct eigenvalues have distinct roots is the converse of what we need. We know that the $lambda_i$'s are distinct eigenvalues of $A^k$, and we need to show that $(lambda_i)^{1/k}$ are distinct eigenvalues of $A$. Also, you changed notation from the original question ($n$ to $k$), but that's not a big deal.
              $endgroup$
              – Atsina
              Dec 5 '18 at 22:56












            • $begingroup$
              Nevermind, I see you proved the contrapositive.
              $endgroup$
              – Atsina
              Dec 5 '18 at 23:00










            • $begingroup$
              yep, I edited. ;
              $endgroup$
              – zwim
              Dec 6 '18 at 0:07










            • $begingroup$
              I've accepted your answer, but now I'm wondering where the assumption that $A$ is invertible comes into play
              $endgroup$
              – Atsina
              Dec 6 '18 at 0:41






            • 2




              $begingroup$
              The invertibility is just to avoid these $lambda_i=0$, since then a divisor of $x^n$ is not necessarily $x$ but any power $x^k$ with $k=1..n$. @Guido, yes I detailled only because OP seemed confused about it.
              $endgroup$
              – zwim
              Dec 6 '18 at 7:37
















            $begingroup$
            It seems as though your proof that distinct eigenvalues have distinct roots is the converse of what we need. We know that the $lambda_i$'s are distinct eigenvalues of $A^k$, and we need to show that $(lambda_i)^{1/k}$ are distinct eigenvalues of $A$. Also, you changed notation from the original question ($n$ to $k$), but that's not a big deal.
            $endgroup$
            – Atsina
            Dec 5 '18 at 22:56






            $begingroup$
            It seems as though your proof that distinct eigenvalues have distinct roots is the converse of what we need. We know that the $lambda_i$'s are distinct eigenvalues of $A^k$, and we need to show that $(lambda_i)^{1/k}$ are distinct eigenvalues of $A$. Also, you changed notation from the original question ($n$ to $k$), but that's not a big deal.
            $endgroup$
            – Atsina
            Dec 5 '18 at 22:56














            $begingroup$
            Nevermind, I see you proved the contrapositive.
            $endgroup$
            – Atsina
            Dec 5 '18 at 23:00




            $begingroup$
            Nevermind, I see you proved the contrapositive.
            $endgroup$
            – Atsina
            Dec 5 '18 at 23:00












            $begingroup$
            yep, I edited. ;
            $endgroup$
            – zwim
            Dec 6 '18 at 0:07




            $begingroup$
            yep, I edited. ;
            $endgroup$
            – zwim
            Dec 6 '18 at 0:07












            $begingroup$
            I've accepted your answer, but now I'm wondering where the assumption that $A$ is invertible comes into play
            $endgroup$
            – Atsina
            Dec 6 '18 at 0:41




            $begingroup$
            I've accepted your answer, but now I'm wondering where the assumption that $A$ is invertible comes into play
            $endgroup$
            – Atsina
            Dec 6 '18 at 0:41




            2




            2




            $begingroup$
            The invertibility is just to avoid these $lambda_i=0$, since then a divisor of $x^n$ is not necessarily $x$ but any power $x^k$ with $k=1..n$. @Guido, yes I detailled only because OP seemed confused about it.
            $endgroup$
            – zwim
            Dec 6 '18 at 7:37















            2












            $begingroup$

            That passage was written by a user with a Japanese user name "TakuyaMurata". I suppose what he meant was actually this: suppose $F$ is an algebraically closed field and $n>0$ is not an integer multiple of $\operatorname{char}(F)$. If $A$ is invertible and $A^n$ is diagonalisable over $F$, then:





            1. $f(x)=(x^n-\lambda_1)\cdots(x^n-\lambda_k)$ annihilates $A$, where $\lambda_1,\ldots,\lambda_k$ are the distinct eigenvalues of $A^n$. (This is because $p(x)=(x-\lambda_1)\cdots(x-\lambda_k)$ annihilates the diagonalisable matrix $A^n$, and $f(A)=p(A^n)=0$.)

            2. Since the $\lambda_i$s are distinct, $x^n-\lambda_i$ and $x^n-\lambda_j$ have no common root when $i\ne j$. As $g(x):=x^n-\lambda_i$ and $g'(x)=nx^{n-1}$ also have no common roots (because $\lambda_i\ne0$ and $n\ne0$ --- this is where the assumptions that $A$ is invertible and that $n$ is not an integer multiple of $\operatorname{char} F$ are used), we see that $f$ has no multiple roots.

            3. As the minimal polynomial $m(x)$ of $A$ divides $f(x)$, $m$ also has no multiple roots.

            4. Therefore, $m$ is a product of distinct linear factors (note that $m$ splits because $F$ is by assumption algebraically closed). Hence $A$ is diagonalisable over $F$.
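
            Step 1 can be sanity-checked numerically. The sketch below (plain Python; the matrix $A$ and the choice $n=3$ are my own illustration, not part of the answer) takes an invertible diagonalisable $A$ with eigenvalues $2$ and $5$, so $A^3$ has eigenvalues $8$ and $125$, and verifies that $f(A)=(A^3-8I)(A^3-125I)=0$.

```python
# Sanity check of step 1: f(x) = (x^n - lambda_1)...(x^n - lambda_k)
# annihilates A, where lambda_j are the distinct eigenvalues of A^n.
# Here A has eigenvalues 2 and 5, n = 3, so A^3 has eigenvalues 8, 125.

def mat_mul(X, Y):
    # 2x2 matrix product
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_sub_scalar(X, c):
    # X - c*I for a 2x2 matrix
    return [[X[0][0] - c, X[0][1]], [X[1][0], X[1][1] - c]]

A = [[2, 1], [0, 5]]                  # invertible, distinct eigenvalues 2 and 5
A3 = mat_mul(mat_mul(A, A), A)        # A^3, eigenvalues 8 and 125

f_of_A = mat_mul(mat_sub_scalar(A3, 8), mat_sub_scalar(A3, 125))
print(f_of_A)   # [[0, 0], [0, 0]] -- f annihilates A
```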















            $endgroup$


















                answered Dec 5 '18 at 23:01









                user1551

                72.4k566127

































