Decomposing a Vector Space into a direct sum of Generalized Eigenspaces
I've seen a proof that doesn't involve any induction, but now I am trying to do this inductively since it is bothering me.
If $T$ is a linear operator on a finite-dimensional vector space $V$ and the characteristic polynomial $\phi(t)$ of $T$ factors as
\begin{gather}
\phi(t) = \prod^n_{i = 1}(t - \lambda_i)^{m_i},
\end{gather}
where each $\lambda_i$ is an eigenvalue of $T$, then, writing $K_{\lambda_i}$ for the generalized eigenspace corresponding to $\lambda_i$, we have
\begin{gather}
V = \bigoplus^n_{i = 1}K_{\lambda_i}.
\end{gather}
My idea was to use a base case of two eigenvalues: suppose $\phi(t) = (t - \lambda_1)^{m_1}(t - \lambda_2)^{m_2}$, and set $f(t) = (t - \lambda_1)^{m_1}$ and $g(t) = (t - \lambda_2)^{m_2}$. Since $f$ and $g$ are coprime, we get
\begin{gather}
V = N(f(T)) \oplus R(f(T)).
\end{gather}
Additionally, we note that $N(f(T)) = K_{\lambda_1}$ and $R(f(T)) = N(g(T)) = K_{\lambda_2}$, so we have
\begin{gather}
V = K_{\lambda_1} \oplus K_{\lambda_2}.
\end{gather}
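The coprimality step used above can be spelled out with a Bézout identity; this is the standard primary-decomposition argument, sketched here only in outline (the polynomials $a(t), b(t)$ exist by the Euclidean algorithm, and $N(\cdot), R(\cdot)$ denote null space and range of the indicated operator):

```latex
\begin{gather}
\gcd(f, g) = 1 \implies \exists\, a(t), b(t) :\ a(t)f(t) + b(t)g(t) = 1, \\
v = a(T)f(T)v + b(T)g(T)v \quad \text{for all } v \in V, \\
f(T)g(T) = \phi(T) = 0 \implies a(T)f(T)v \in N(g(T)),\quad b(T)g(T)v \in N(f(T)), \\
v \in N(f(T)) \cap N(g(T)) \implies v = a(T)f(T)v + b(T)g(T)v = 0.
\end{gather}
```

So $V = N(f(T)) + N(g(T))$ with trivial intersection, i.e. $V = N(f(T)) \oplus N(g(T))$; identifying $R(f(T))$ with $N(g(T))$ then recovers the displayed decomposition.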
This is where I am not sure how to continue; I don't know what to take as my inductive hypothesis. My ultimate goal is this: given the full polynomial
\begin{gather}
\phi(t) = (t - \lambda_1)^{m_1} \cdots (t - \lambda_n)^{m_n},
\end{gather}
we can set $f(t) = (t - \lambda_1)^{m_1}$ and $g(t) = (t - \lambda_2)^{m_2} \cdots (t - \lambda_n)^{m_n}$, and then, by the discussion above, argue that
\begin{gather}
R(f(T)) = \bigoplus^n_{i = 2} K_{\lambda_i},
\end{gather}
so that $V$ is the direct sum of these generalized eigenspaces. But I am not sure what the inductive step here is.
linear-algebra
asked Nov 13 at 22:11
Josabanks
434
My feeling here is that you would start by showing $V = K_{\lambda_1} \oplus S$ for some $S$, and then proceed to decompose $S$.
– AnyAD
Nov 13 at 22:27
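One way to phrase the inductive step the comment points toward, inducting on the number $n$ of distinct eigenvalues (a sketch only; the invariance of $R(f(T))$ and the claim about the restricted characteristic polynomial still need to be verified):

```latex
\begin{gather}
V = N(f(T)) \oplus R(f(T)) = K_{\lambda_1} \oplus R(f(T)), \\
T' := T\big|_{R(f(T))} \text{ has characteristic polynomial } g(t) = \prod_{i=2}^{n}(t - \lambda_i)^{m_i}, \\
R(f(T)) = \bigoplus_{i=2}^{n} K_{\lambda_i}(T') = \bigoplus_{i=2}^{n} K_{\lambda_i}(T), \\
V = K_{\lambda_1} \oplus \bigoplus_{i=2}^{n} K_{\lambda_i}.
\end{gather}
```

Here the second line is the heart of the inductive step: $R(f(T))$ is $T$-invariant and $T'$ has only the $n-1$ distinct eigenvalues $\lambda_2, \dots, \lambda_n$, so the inductive hypothesis applies to $T'$; the third line then uses that $K_{\lambda_i}(T) \subseteq R(f(T))$ for $i \ge 2$.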