Can we change the singular values of a matrix to create a positive definite matrix?
Given a matrix, is there a way, by changing its singular values, to transform it into a positive definite matrix?



I do a Singular Value Decomposition. I look at the singular values. If the matrix is not positive definite, then I change them to create a positive definite matrix.



How would I change them, and is there any theory on how close the resulting matrix would be to the original matrix? I have no preference for the norm.



A comment suggested adding $\mu I$, with $\mu>0$, to the diagonal matrix of singular values. This was my original procedure; however, for the following matrix it doesn't seem to work.



$$A=\left(
\begin{array}{cccc}
1.24215 & 1.6337 & -0.650968 & 0.635641 \\
0.183443 & 0.0432606 & 0.130042 & -0.318818 \\
-0.204386 & -0.435348 & -0.966572 & 0.506948 \\
-0.321633 & 0.286963 & 0.587576 & 1.37028 \\
\end{array}
\right)$$



I add the identity matrix to the diagonal matrix of singular values, i.e. I compute $U(\text{Diag}+I)V^\intercal$, and I get:



$$\left(
\begin{array}{cccc}
1.79872 & 2.36397 & -0.976203 & 0.861808 \\
1.00304 & -0.442107 & 0.434236 & -0.331093 \\
-0.197866 & -0.914375 & -1.73106 & 0.938273 \\
-0.457474 & 0.327607 & 1.05367 & 2.24358 \\
\end{array}
\right)$$



This is not positive definite.
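This failure can be reproduced numerically. The following is a minimal sketch (assuming NumPy is available) that rebuilds $U(\text{Diag}+I)V^\intercal$ and checks the eigenvalues of its symmetric part:

```python
# Sketch only: numerically test the construction U (Diag + I) V^T.
# Assumes NumPy; the PD test used is "symmetric part has only positive eigenvalues".
import numpy as np

A = np.array([
    [ 1.24215,   1.6337,    -0.650968,  0.635641],
    [ 0.183443,  0.0432606,  0.130042, -0.318818],
    [-0.204386, -0.435348,  -0.966572,  0.506948],
    [-0.321633,  0.286963,   0.587576,  1.37028 ],
])

U, s, Vt = np.linalg.svd(A)
B = U @ np.diag(s + 1.0) @ Vt   # U (Diag + I) V^T

# A real matrix is positive definite iff its symmetric part has only
# positive eigenvalues; here the smallest one is negative (note the
# diagonal entry -1.73106 of B already rules out positive definiteness).
min_eig = np.linalg.eigvalsh((B + B.T) / 2).min()
print(min_eig < 0)  # True: B is not positive definite
```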



Second addition:



I take the original matrix $A$ and compute the diagonalization of its symmetric part, $(A+A^T)/2 = P\,\text{Diag}(\lambda_i)\,P^{-1}$. I replace each non-positive eigenvalue with a positive one via $\max(\lambda_i,\mu)$, where $\mu$ is a positive number and $\lambda_i$ is the $i$-th eigenvalue. Then I output the new matrix $P\,\text{Diag}(\max(\lambda_i,\mu))\,P^{-1}$.



This seems to give a positive definite matrix.
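The procedure in the second addition can be sketched as follows (NumPy assumed; the floor value `mu` is a hypothetical choice). Since the symmetric part $S$ is real symmetric, its eigenvector matrix $P$ is orthogonal, so $P^{-1} = P^T$:

```python
# Sketch of the "second addition": floor the eigenvalues of the
# symmetric part of A at a small positive mu.
import numpy as np

def floor_eigenvalues(A, mu=1e-3):
    """Return P Diag(max(lambda_i, mu)) P^{-1} for S = (A + A^T)/2."""
    S = (A + A.T) / 2                      # symmetric part of A
    lam, P = np.linalg.eigh(S)             # S = P diag(lam) P^T, P orthogonal
    return P @ np.diag(np.maximum(lam, mu)) @ P.T
```

The output is symmetric with all eigenvalues at least $\mu$, hence positive definite; note that the antisymmetric part of $A$ is discarded, so the result differs from $A$ unless $A$ was already symmetric.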
  • What do you mean by "changing a singular value of a matrix"?
    – 5xum, Dec 3 '18 at 10:23

  • @5xum I do a singular value decomposition and look at the singular values. If the matrix is not positive definite, then I change them to create a positive definite matrix.
    – An old man in the sea., Dec 3 '18 at 10:25

  • You can transform any matrix into a positive definite matrix: just change the original matrix into the identity matrix. Without being more precise about what you mean by "transform", the question isn't really meaningful. (For example, you outline a procedure in your comment; does that satisfy your requirements?)
    – Greg Martin, Dec 3 '18 at 10:31

  • A way is to add $\mu I$ to your matrix, with $I$ the identity matrix, such that $\lambda_i + \mu > 0$ for all $\lambda_i$, the $\lambda_i$ being the singular values.
    – Damien, Dec 3 '18 at 10:32

  • If you apply your modified matrix to a given vector $x$, the error will depend on that vector. It is easy to calculate the error for a given vector, and you can deduce the error for the worst case.
    – Damien, Dec 3 '18 at 10:42
Tags: linear-algebra, matrices
asked Dec 3 '18 at 10:22, edited Dec 3 '18 at 12:02
– An old man in the sea.
1 Answer
Among other things, it depends which definition of positive definite you use. One is that all the eigenvalues are positive; another is that $\langle A v, v \rangle \geq 0$ for all vectors $v$ over the complex field.

In either case, you need positive eigenvalues. Singular values are nonnegative numbers, the $i$-th largest of which is greater than or equal to the absolute value of the $i$-th largest eigenvalue. Your singular values can't possibly detect whether your matrix has positive eigenvalues, and you can't modify them to get positive eigenvalues.

The second definition requires that your matrix be self-adjoint. In this case, the eigenvalues are very useful. Every self-adjoint matrix is unitarily diagonalizable as $A=UDU^*$, where $D$ is a diagonal matrix with the eigenvalues of $A$ as entries. Simply replace the negative entries of $D$ with zero and you get the best positive (semi)definite approximation to $A$ (in the two-norm).

In the non-self-adjoint case you can also modify your eigenvalues, but it's not as clear to me how you would get a best approximation. One great thing about unitaries is that the two-norm is invariant under conjugation by them.

Note that no negative definite matrix has a best strictly positive definite approximation, since the set of strictly positive definite matrices is not closed.
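The zero-clipping construction for a self-adjoint matrix can be sketched as follows (NumPy assumed; for a real symmetric $S$, the two-norm distance to the result is $\max(0, -\lambda_{\min})$):

```python
# Sketch: best positive semidefinite approximation (in the spectral norm)
# to a symmetric matrix, obtained by zeroing its negative eigenvalues.
import numpy as np

def nearest_psd(S):
    lam, U = np.linalg.eigh(S)                 # S = U diag(lam) U^T
    return U @ np.diag(np.maximum(lam, 0.0)) @ U.T

S = np.array([[1.0, 2.0],
              [2.0, -3.0]])                    # symmetric but indefinite
P = nearest_psd(S)
# The spectral-norm error S - P has norm |lambda_min|, the magnitude of
# the most negative eigenvalue of S.
err = np.linalg.norm(S - P, 2)
```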
  • The statement "a positive definite [matrix] must be self-adjoint" is wrong. Consider $A = \pmatrix{1 & 1 \\ 0 & 1}$, whose eigenvalues are both $+1$. It's not the case that $A^t = A$, so it's not self-adjoint. In general, a matrix is positive definite iff its symmetric part $\frac{A + A^t}{2}$ has all positive eigenvalues. See, for instance, mathworld.wolfram.com/PositiveDefiniteMatrix.html.
    – John Hughes, Dec 3 '18 at 11:33

  • @JohnHughes, so if I take the symmetric part, calculate its eigenvalues, and make them positive, am I sure to get a PD matrix? Do all symmetric parts have real eigenvalues?
    – An old man in the sea., Dec 3 '18 at 11:39

  • I wasn't trying to answer the original question; I was merely pointing out that this answer was based on a mistaken claim. Like Greg Martin, I don't really understand your question, so I can't say whether your description above, which is a little vague, is a correct answer.
    – John Hughes, Dec 3 '18 at 11:50

  • Decompose the matrix into the sum of a symmetric part and an antisymmetric part. Modify the symmetric part as proposed in the answers; don't modify the antisymmetric part. Hope it helps this time. Sorry.
    – Damien, Dec 3 '18 at 11:51

  • @JohnHughes It really depends what you mean by a positive definite matrix. I would not consider your example positive definite, since if you let $v=(i, 1)$ then $\langle A v, v\rangle = 2-i$. I should have clarified that I was using this definition and thinking over $\mathbb{C}$, but the author should also clarify what they mean by positive definite.
    – Eric, Dec 3 '18 at 12:01
answered Dec 3 '18 at 11:13, edited Dec 3 '18 at 15:07
– Eric
$endgroup$
– John Hughes
Dec 3 '18 at 11:33












$begingroup$
@JohnHughes, so if I use the symmetric part, calculate the eigenvalues, and put them positive, then I'm sure to get PD matrix? All symmetric parts have real eigenvalues?
$endgroup$
– An old man in the sea.
Dec 3 '18 at 11:39




$begingroup$
@JohnHughes, so if I use the symmetric part, calculate the eigenvalues, and put them positive, then I'm sure to get PD matrix? All symmetric parts have real eigenvalues?
$endgroup$
– An old man in the sea.
Dec 3 '18 at 11:39












$begingroup$
I wasn't trying to answer the original question; I was merely pointing out that this answer was based on a mistaken claim. Like Greg Martin, I don't really understand your question, so I can't say whether your description above, which is a little vague, is a correct answer.
$endgroup$
– John Hughes
Dec 3 '18 at 11:50




$begingroup$
I wasn't trying to answer the original question; I was merely pointing out that this answer was based on a mistaken claim. Like Greg Martin, I don't really understand your question, so I can't say whether your description above, which is a little vague, is a correct answer.
$endgroup$
– John Hughes
Dec 3 '18 at 11:50












$begingroup$
Let us decompose the matrix in the sum of a symmetric part and an asymmetric part. Modify the symmetric part as proposed in answers. Don't modify the asymmetric part. Hope it will help this time. Sorry
$endgroup$
– Damien
Dec 3 '18 at 11:51




$begingroup$
Let us decompose the matrix in the sum of a symmetric part and an asymmetric part. Modify the symmetric part as proposed in answers. Don't modify the asymmetric part. Hope it will help this time. Sorry
$endgroup$
– Damien
Dec 3 '18 at 11:51












$begingroup$
@JohnHughes It really depends what you mean by a positive definite matrix. I would not consider your example positive definite since if you let $v=(i, 1)$ then $langle A v, vrangle = 2-i$. I should have clarified I was using this definition, and thinking over $mathbb{C}$ but the author also should clarify what they mean by positive definite.
$endgroup$
– Eric
Dec 3 '18 at 12:01






$begingroup$
@JohnHughes It really depends what you mean by a positive definite matrix. I would not consider your example positive definite since if you let $v=(i, 1)$ then $langle A v, vrangle = 2-i$. I should have clarified I was using this definition, and thinking over $mathbb{C}$ but the author also should clarify what they mean by positive definite.
$endgroup$
– Eric
Dec 3 '18 at 12:01




















draft saved

draft discarded




















































Thanks for contributing an answer to Mathematics Stack Exchange!


  • Please be sure to answer the question. Provide details and share your research!

But avoid



  • Asking for help, clarification, or responding to other answers.

  • Making statements based on opinion; back them up with references or personal experience.


Use MathJax to format equations. MathJax reference.


To learn more, see our tips on writing great answers.




draft saved


draft discarded














StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3023880%2fcan-we-change-the-singular-values-of-a-matrix-to-create-a-positive-definite-matr%23new-answer', 'question_page');
}
);

Post as a guest















Required, but never shown





















































Required, but never shown














Required, but never shown












Required, but never shown







Required, but never shown

































Required, but never shown














Required, but never shown












Required, but never shown







Required, but never shown







Popular posts from this blog

Probability when a professor distributes a quiz and homework assignment to a class of n students.

Aardman Animations

Are they similar matrix