Functional Analysis: Cannot find a linear combination that involves large scalars but represents a small vector
Theorem:
Let $\{x_1,\ldots,x_n\}$ be a linearly independent set of vectors in a normed space $X$ (of any dimension). Then there is a number $c>0$ such that for every choice of scalars $a_1,\ldots,a_n$:
$$\left\lVert a_1x_1+a_2x_2+\cdots+a_nx_n \right\rVert \ge c \left( \lvert a_1 \rvert + \cdots + \lvert a_n \rvert \right)$$
My question is: why is this theorem relevant? It seems that I could pick an extremely small $c$ value close to zero that would satisfy the final inequality. I realize that this is likely wrong, but I am hoping to get some intuition about what the theorem is getting at. The author gets at it a bit with the following statement:
Very roughly speaking it states that in the case of linear independence of vectors we cannot find a linear combination that involves large scalars but represents a small vector.
But the fact that a $c$ value is needed seems to suggest that it is there to reduce the total value of the scalars (if $c<1$). So it would seem that we can find cases where we have small vectors with large scalars, but we simply do away with that by imposing a constant $c$ that makes the scalars "less big" (the RHS of the inequality).
This lemma is from Kreyszig's Introductory Functional Analysis book.
functional-analysis
asked Jan 2 at 4:06 by H_1317, edited Jan 2 at 5:02 by zipirovich
For example, $\|N(1,0)+N(-1,1/N)\|=1$ holds for arbitrarily large $N$.
– SmileyCraft
Jan 2 at 4:22
@SmileyCraft, I don't think that applies here. You've got one of the components of your $x_2$ depending on $a_1$ and $a_2$, while the statement has the $\{x_1, x_2\}$ fixed.
– JonathanZ
Jan 2 at 4:26
Yes, this is a fixed set of linearly independent vectors.
– H_1317
Jan 2 at 4:30
@JonathanZ The question was why the theorem is relevant. My example shows that $c$ might need to be arbitrarily small. IMO this shows why the theorem is not trivial.
– SmileyCraft
Jan 2 at 4:30
@H_1317 Consider the basis $\{(1,0),(-1,0.001)\}$. Then both vectors have norm approximately $1$, so they're pretty small. However, even though $1000$ is a pretty big number, we find $\|1000(1,0)+1000(-1,0.001)\|=1$, which is again pretty small. We find that for $c>1/2000$ the inequality does not hold. However, according to the theorem there does exist some $c>0$ such that the inequality always holds.
– SmileyCraft
Jan 2 at 4:43
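A quick numerical companion to SmileyCraft's example (my own sketch in Python, not part of the thread): since both sides of the inequality scale linearly in the coefficients, it suffices to scan pairs with $\lvert a_1\rvert + \lvert a_2\rvert = 1$; the smallest norm that appears is the largest $c$ this basis allows, roughly $1/2000$.

```python
import numpy as np

# Hypothetical check of the nearly dependent basis from the comment above:
# x1 = (1, 0), x2 = (-1, 0.001). Scan coefficient pairs with |a1| + |a2| = 1
# and record the smallest norm of the combination; that is the best c.
x1 = np.array([1.0, 0.0])
x2 = np.array([-1.0, 0.001])

t = np.linspace(0.0, 1.0, 200001)
best = np.inf
for s in (1.0, -1.0):                              # relative sign of a1, a2
    v = np.outer(t, x1) + np.outer(s * (1.0 - t), x2)
    best = min(best, np.linalg.norm(v, axis=1).min())

print(best)  # about 0.0005 = 1/2000, attained near a1 = a2 = 1/2
```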
2 Answers
The fact that you are saying "... a $c$ value is needed" and "imposing a constant $c$ ..." makes me think that you are looking at this the wrong way round. The theorem says (rewriting it a little bit) that if someone gives us a linearly independent set $\{x_1, \ldots, x_n\}$, then the ratio $$\left\lVert a_1x_1 + a_2x_2 + \cdots + a_nx_n \right\rVert \Big/ \left(\lvert a_1 \rvert + \cdots + \lvert a_n \rvert\right)$$ stays bounded away from zero by some definite amount (where we skip the case of all $a_i = 0$).
Simple linear independence says that the ratio never equals zero. The theorem gives a stronger result. If we take the $c$ in your statement to be as large as possible, i.e. set it equal to the minimum of the ratio, then we can see $c$ as a measure of how close to linear dependence our set of vectors is.
To me it looks like an analog of the "angle between two vectors" that doesn't use an inner product. Let's look at the simplest possible example: two unit vectors in $\mathbb{R}^2$. In this case the ratio (taken with the equivalent denominator $\sqrt{a_1^2+a_2^2}$) can be shown to equal $$\sqrt{\dfrac{a_1^2 + a_2^2 + 2a_1a_2 \cos\theta}{a_1^2 + a_2^2}},$$ where $\cos\theta = \langle x_1, x_2\rangle$. For obtuse $\theta$ this has a minimum of $\sqrt{1+\cos\theta}$, the value of $c$ in your theorem. Consider what happens to vectors similar to those used by SmileyCraft: $(1,0)$ and $(\cos\alpha, \sin\alpha)$ for $\alpha$ going from $\pi/2$ to $\pi$. $\cos\theta$ goes from $0$ to $-1$ and your value of $c$ goes from $1$ down to $0$: as the vectors get closer to being linearly dependent, the value of $c$ gets closer to $0$.
answered Jan 2 at 5:21 by JonathanZ
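For anyone wanting to check the closed form numerically, here is a small sketch (mine, not JonathanZ's, under the answer's setup of unit vectors in $\mathbb{R}^2$): sweeping $(a_1,a_2)$ over the unit circle makes the denominator $\sqrt{a_1^2+a_2^2}$ equal to $1$, so the minimum norm observed should match $\sqrt{1+\cos\theta}$ for the obtuse angles considered above.

```python
import numpy as np

# Sketch (not JonathanZ's code): compare the scanned minimum of the ratio
# with the closed form sqrt(1 + cos(theta)) for a few obtuse angles theta.
for theta in (np.pi / 2, 2 * np.pi / 3, 5 * np.pi / 6, 0.99 * np.pi):
    x1 = np.array([1.0, 0.0])
    x2 = np.array([np.cos(theta), np.sin(theta)])  # unit vector at angle theta
    phi = np.linspace(0.0, 2.0 * np.pi, 100001)
    combos = np.outer(np.cos(phi), x1) + np.outer(np.sin(phi), x2)
    observed = np.linalg.norm(combos, axis=1).min()
    print(round(observed, 4), round(np.sqrt(1.0 + np.cos(theta)), 4))
```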
I find your answer very interesting, so thank you. However, I'm not quite sure about the first part regarding the ratio: how are you deriving that ratio, and how can $\cos(\theta)$ equal a vector instead of a number?
– H_1317
Feb 6 at 1:44
It is interesting to note that the converse holds as well.
Assume $\sum_{i=1}^n a_ix_i = 0$ for some scalars $a_i$. If such a constant $c > 0$ exists, then we would have
$$0 = \left\lVert a_1x_1 + \cdots + a_nx_n \right\rVert \ge c\left(\lvert a_1\rvert + \cdots + \lvert a_n\rvert\right)$$
which implies $\lvert a_1\rvert + \cdots + \lvert a_n\rvert = 0$, or $a_1=\cdots=a_n = 0$. Hence $\{x_1, \ldots, x_n\}$ is linearly independent.
Therefore the property of not being able to take linear combinations with large scalars and yield small vectors actually characterizes finite linearly independent sets.
Consider what happens with the standard basis $e_1, \ldots, e_n$ of $\mathbb{R}^n$:
$$\left\lVert a_1e_1 + \cdots + a_ne_n \right\rVert_2 = \left\lVert (a_1, \ldots, a_n) \right\rVert_2 = \sqrt{\lvert a_1\rvert^2+\cdots + \lvert a_n\rvert^2} \ge \frac{1}{\sqrt{n}}\left(\lvert a_1\rvert+\cdots+\lvert a_n\rvert\right),$$
where the last step is the Cauchy-Schwarz inequality, so $c = 1/\sqrt{n}$ works here.
answered Jan 2 at 12:23 by mechanodroid
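A small sanity check of the $1/\sqrt{n}$ constant (my sketch, not part of the answer): random coefficient vectors never fall below the bound, and equal coefficients attain it, which is the equality case of Cauchy-Schwarz.

```python
import numpy as np

# Sketch: for the standard basis of R^n, the ratio ||a||_2 / ||a||_1 is
# bounded below by 1/sqrt(n), with equality when all |a_i| coincide.
n = 5
rng = np.random.default_rng(0)
a = rng.normal(size=(100000, n))                   # random coefficient vectors
ratios = np.linalg.norm(a, axis=1) / np.abs(a).sum(axis=1)
print(ratios.min() >= 1.0 / np.sqrt(n))            # True: bound never violated
a_eq = np.ones(n)                                  # equal coefficients
print(np.linalg.norm(a_eq) / np.abs(a_eq).sum())   # exactly 1/sqrt(5) ~ 0.4472
```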