Operator norm ($\ell_2 \to \ell_1$)














Let $X$ be a finite-dimensional normed vector space, $Y$ an arbitrary normed vector space, and $T : X \to Y$ a linear operator.



I want to calculate $\|T\|$ where $X = K^n$ is equipped with the Euclidean norm $\|\cdot\|_2$, $Y := \ell_1(\mathbb{N})$, and $Tx := (x_1,\ldots,x_n,0,0,\ldots) \in \ell_1(\mathbb{N})$ for all $x = (x_1,\ldots,x_n) \in K^n$.



I do not know how to continue:
$$\|T\| = \sup_{x \neq 0} \frac{\|Tx\|_1}{\|x\|_2} = \sup_{x \neq 0} \frac{\|(x_1,\ldots,x_n,0,0,\ldots)\|_1}{\|(x_1,\ldots,x_n)\|_2} = \sup_{x \neq 0} \frac{|x_1| + \ldots + |x_n|}{(|x_1|^2 + \ldots + |x_n|^2)^{1/2}} = \;?$$










functional-analysis operator-theory norm normed-spaces lp-spaces

asked Dec 16 '18 at 20:16 by Anna Schmitz; edited Dec 18 '18 at 15:34 by user593746
  • I don't think it's a good approach, but you can use the fact that $\|T\| = \sup_{x:\,\|x\|_2 = 1} \|Tx\|_1$ and then apply Lagrange multipliers. In other words, you have to maximize $|x_1| + \ldots + |x_n|$ subject to $|x_1|^2 + \ldots + |x_n|^2 = 1$.
    – hyperkahler
    Dec 16 '18 at 20:30












  • I have tried a lot. I have never worked with Lagrange multipliers, but $\|T\| = \sup_{\|x\|_2 = 1} \|Tx\|_1$.
    – Anna Schmitz
    Dec 16 '18 at 20:38


















2 Answers


















I will elaborate on my comment above.

Given an operator $T : X \to Y$ with $X = \mathbb{K}^n$ and $Y = \ell^1(\mathbb{N})$, its norm is given by
$$\|T\|_{\mathrm{op}} = \sup_{x \neq 0} \frac{\|Tx\|_1}{\|x\|_2} = \sup_{\|x\|_2 \leq 1} \|Tx\|_1 = \sup_{\|x\|_2 = 1} \|Tx\|_1.$$

So we have to maximize
$$\|Tx\|_1 = |x_1| + \ldots + |x_n|$$
subject to
$$\|x\|_2 := (|x_1|^2 + \ldots + |x_n|^2)^{1/2} = 1.$$

Let $t_i := |x_i|$; the problem then reads:
$$t_1 + t_2 + \ldots + t_n \to \max,$$
$$t_1^2 + t_2^2 + \ldots + t_n^2 = 1,$$
$$t_i \geq 0 \quad \forall\, i = 1, \ldots, n.$$

The Lagrangian for the problem is
$$L = (t_1 + \ldots + t_n) - \lambda (t_1^2 + \ldots + t_n^2 - 1),$$
where $\lambda$ is the Lagrange multiplier.

The necessary extremum condition gives
$$\frac{\partial L}{\partial t_i} = 1 - 2\lambda t_i = 0,$$
thus $t_i = \frac{1}{2\lambda}$. Since $t_i \geq 0$, it follows that $\lambda > 0$.

The stationary point is $t = \left(\frac{1}{2\lambda}, \ldots, \frac{1}{2\lambda}\right)$, and we must find the $\lambda$ for which this point satisfies the constraint above.

$\|t\|_2 = 1$ is equivalent to
$$n \cdot \frac{1}{4\lambda^2} = 1,$$
which gives $\lambda = \frac{\sqrt{n}}{2}$ (we take the positive root because of the sign restriction above).

Thus $t_i = \frac{1}{\sqrt{n}}$, and the maximum equals
$$n \cdot \frac{1}{\sqrt{n}} = \sqrt{n},$$
hence $\|T\| = \sqrt{n}$.
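One step worth spelling out: the supremum is attained because the unit sphere of $\mathbb{K}^n$ is compact and $x \mapsto \|Tx\|_1$ is continuous; and points with some $t_i = 0$ reduce to the same problem in dimension $n-1$, with maximum $\sqrt{n-1} < \sqrt{n}$, so the interior stationary point above is indeed the global maximum.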






answered Dec 16 '18 at 20:52, edited Dec 16 '18 at 21:24 – hyperkahler









  • Hi, how does $n \cdot 1/n = \sqrt{n}$? Also, nice alternative thinking, but that's a bit off the road of functional analysis.
    – Rebellos
    Dec 16 '18 at 21:16










  • @Rebellos I've made a mistake; please check the answer for the updates.
    – hyperkahler
    Dec 16 '18 at 21:25
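For readers who want a quick numerical sanity check of the $\sqrt{n}$ value, here is a minimal sketch (assuming NumPy; the dimension $n = 5$ and the sample count are arbitrary choices):

```python
import numpy as np

# ||T|| = sup_{||x||_2 = 1} ||x||_1, which should equal sqrt(n).
rng = np.random.default_rng(0)
n = 5

# Random search over directions: the ratio ||x||_1 / ||x||_2 is scale-invariant,
# so sampling Gaussian vectors probes the unit sphere uniformly.
best = max(
    np.linalg.norm(x, 1) / np.linalg.norm(x, 2)
    for x in rng.standard_normal((100_000, n))
)

# Evaluating at the maximizer t_i = 1/sqrt(n) found above hits sqrt(n) exactly.
x_star = np.ones(n) / np.sqrt(n)
print(best, np.linalg.norm(x_star, 1), np.sqrt(n))
```

The random search only approaches $\sqrt{5} \approx 2.236$ from below; evaluating at $x^\ast$ confirms the supremum is attained.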




















Using the Cauchy–Schwarz inequality, we have
$$
\sum_{i=1}^n \lvert x_i \rvert = \sum_{i=1}^n \lvert x_i \rvert \cdot 1 \leqslant \left(\sum_{i=1}^n \lvert x_i \rvert^2\right)^{1/2} \left(\sum_{i=1}^n 1\right)^{1/2} = \sqrt{n}\,\left(\sum_{i=1}^n \lvert x_i \rvert^2\right)^{1/2},
$$
hence $\lVert T \rVert \leqslant \sqrt{n}$.
For the opposite inequality, look at the case where $x_i = 1$ for all $i \in \{1,\dots,n\}$.

answered Dec 18 '18 at 10:56, edited Dec 18 '18 at 15:41 – Davide Giraudo
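To spell out that equality case: with $x = (1,\ldots,1)$ one gets
$$\frac{\|Tx\|_1}{\|x\|_2} = \frac{n}{\sqrt{n}} = \sqrt{n},$$
so the upper bound is attained and $\|T\| = \sqrt{n}$, in agreement with the Lagrange-multiplier computation above.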





