Understanding the definition of tensors as multilinear maps














The question arises from the definition of the space of $(p,q)$ tensors as the set of multilinear maps from a Cartesian product of copies of a vector space and its dual to the underlying field, equipped with addition and scalar-multiplication rules, given in this lecture series, at this point in time, as follows:



A $(p,q)$ tensor $T$ is a MULTILINEAR MAP that takes $p$ copies of $V^*$ and $q$ copies of $V$ and maps multilinearly (linearly in each entry) to $K$:



$$T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}{\rightarrow} K\tag 1$$



The $(p,q)$ TENSOR SPACE is defined as a set:



$$\begin{align}T^p_q\,V &= \underset{p}{\underbrace{V\color{darkorange}{\otimes}\cdots\color{darkorange}{\otimes} V}}\ \color{darkorange}{\otimes}\ \underset{q}{\underbrace{V^*\color{darkorange}{\otimes}\cdots\color{darkorange}{\otimes} V^*}}:=\{T\,|\,T \text{ is a } (p,q) \text{ tensor}\}\tag{2}\\[3ex]&=\{T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}{\rightarrow} K\}\tag{3}\end{align}$$



This expression symbolizes the set of all $(p,q)$ tensors $T$, equipped with pointwise addition and scalar multiplication.



This is (not surprisingly) consistent with the Wikipedia definition of tensors as multilinear maps.





QUESTION:



I don't understand why in Eq. (2) $p$ seems to count (and equal) the number of factors of the vector space $V$, while in Eq. (3) the same $p$ counts the number of factors of the dual space $V^*.$



Can we say, then, that the $q$ factors $V^*\otimes V^*\otimes\cdots$ in Eq. (2) are linear functionals waiting for the same number of vectors in $V$ to produce real numbers that are later multiplied?



If so, what are the $p$ factors of $V$ in $V\otimes V\otimes\cdots$ in Eq. (2) doing? Are they vectors in $V$ "waiting" for a functional to be mapped into $K$? And if so, where is that functional defined? I guess that since we are defining a set, it can be any functional?



Is this the correct interpretation?



And how are the $\color{darkorange}{\otimes}$ and $\times$ operations to be interpreted in these equations?










  • @Peter Franek As my understanding improves, thanks to good answers like yours, I hope the edits to the OP make the question more precise. Is it less confusing now?
    – Antoni Parellada
    Feb 13 '17 at 9:56

  • @Peter Franek Thank you for pointing this out. I believe it should be OK now.
    – Antoni Parellada
    Feb 13 '17 at 11:14

  • @PeterFranek It's corrected now. Thanks.
    – Antoni Parellada
    Feb 13 '17 at 11:58
















tensor-products tensors






edited Feb 14 '17 at 3:02







user940

















asked Feb 10 '17 at 18:03









Antoni Parellada











6 Answers

Let's first look at a very special type of tensor, namely the (0,1) tensor. What is it? Well, it is the tensor product of $0$ copies of members of $V$ and one copy of members of $V^*$. That is, it is a member of $V^*$.



But what is a member of $V^*$? Well, by the very definition of $V^*$, it is a linear function $\phi:V\to K$. Let's write this explicitly:
$$T^0_1V = V^* = \{\phi:V\to K \mid \phi \text{ is linear}\}$$
You see, already at this point, where we didn't even use a tensor product, we get a $V^*$ on one side, and a $V$ on the other, simply by inserting the definition of $V^*$.



From this, it is obvious why $(0,q)$-tensors have $q$ copies of $V^*$ in the tensor product $(2)$, but $q$ copies of $V$ in the domain of the multilinear function in $(3)$.



OK, but why do you have a $V^*$ in the map in $(3)$ for each factor $V$ in the tensor product? After all, vectors are not functions, are they?



Well, in some sense they are: There is a natural linear map from $V$ to its double dual $V^{**}$, that is, the set of linear functions from $V^*$ to $K$. Indeed, for finite-dimensional vector spaces, you even have that $V^{**} \cong V$. This natural map is defined by the condition that applying the image of $v$ to $\phi\in V^*$ gives the same value as applying $\phi$ to $v$. I suspect that the lecture assumes finite-dimensional vector spaces. In that case, you can identify $V$ with $V^{**}$, and therefore you get
$$T^1_0V = V = V^{**} = \{T:V^*\to K \mid T \text{ is linear}\}$$
Here the second equality is exactly that identification.



Now again it should be obvious why $p$ copies of $V$ in the tensor product $(2)$ give $p$ factors of $V^*$ for the domain of the multilinear functions in $(3)$.



Edit: On request in the comments, something about the relations of those terms to the Kronecker product.



The tensor product $\color{darkorange}{\otimes}$ in $(2)$ is a tensor product not of (co)vectors, but of (co)vector spaces. The result of that tensor product describes not one tensor, but the set of all tensors of a given type. The tensors are then elements of the corresponding set. And given a basis of $V$, the tensors can then be specified by giving their coefficients in that basis.



This is completely analogous to the vector space itself. We have the vector space $V$, this vector space contains vectors $v\in V$, and given a basis $\{e_i\}$ of $V$, we can write the vector in components, $v = \sum_i v^i e_i$.



Similarly for $V^*$: we can write each member $\phi\in V^*$ in the dual basis $\omega^i$ (defined by $\omega^i(e_j)=\delta^i_j$) as $\sum_i \phi_i \omega^i$. An alternative way to get the components $\phi_i$ is to notice that $\phi(e_k) = \sum_i \phi_i \omega^i(e_k) = \sum_i \phi_i \delta^i_k = \phi_k$. That is, the components of the covector are just the function values at the basis vectors.



This way one also sees immediately that $\phi(v) = \sum_i \phi(v^i e_i) = \sum_i v^i\phi(e_i) = \sum_i v^i \phi_i$, which is sort of like an inner product, but not exactly, because it behaves differently under a change of basis.
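In coordinates, this pairing is literally a dot product of the two component arrays. A minimal numpy sketch (the component values are made up purely for illustration):

```python
import numpy as np

# Components of a vector v = sum_i v^i e_i and a covector phi = sum_i phi_i omega^i
# (the numbers are arbitrary, chosen only for illustration)
v = np.array([2.0, -1.0, 3.0])   # v^i
phi = np.array([1.0, 4.0, 0.5])  # phi_i

# phi(v) = sum_i phi_i v^i -- the same formula as a dot product of the arrays
value = np.dot(phi, v)           # == 2*1 + (-1)*4 + 3*0.5 == -0.5
```

The caveat in the text still applies: this only *looks* like an inner product, because $\phi_i$ and $v^i$ transform oppositely under a change of basis.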



Now let's look at a $(0,2)$ tensor, that is, a bilinear function $f:V\times V\to K$. Note that $f\in V^*\color{darkorange}{\otimes} V^*$, as $V^*\color{darkorange}{\otimes} V^*$ is by definition the set of all such functions (see eq. $(3)$). Now, being a bilinear function, one again only needs to know its values at the basis vectors, as
$$f(v,w) = f\left(\sum_i v^i e_i, \sum_j w^j e_j\right) = \sum_{i,j}v^i w^j f(e_i,e_j)$$
and therefore we can define the components $f_{ij} = f(e_i,e_j)$ and get $f(v,w)=\sum_{i,j}f_{ij}v^iw^j$.
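The recipe "the components are the values at basis vectors" is easy to check numerically. A small sketch with a made-up bilinear form on $\Bbb R^2$ (the coefficients are arbitrary):

```python
import numpy as np

# An illustrative bilinear form f on R^2 (coefficients chosen arbitrarily)
def f(v, w):
    return 2*v[0]*w[0] + 3*v[0]*w[1] - v[1]*w[0] + 5*v[1]*w[1]

# Components f_ij = f(e_i, e_j), obtained by plugging in standard basis vectors
e = np.eye(2)
F = np.array([[f(e[i], e[j]) for j in range(2)] for i in range(2)])

# By bilinearity, f(v, w) = sum_ij f_ij v^i w^j for arbitrary v, w
v, w = np.array([1.0, 2.0]), np.array([-3.0, 4.0])
direct = f(v, w)                 # evaluate the form directly
via_components = v @ F @ w       # contract the component matrix instead
```

Both evaluations agree, which is exactly the statement $f(v,w)=\sum_{i,j}f_{ij}v^iw^j$.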



This goes also for general tensors: A single tensor $T\in T^p_qV$ is a multilinear function $T:(V^*)^p\times V^q\to K$, and it is completely determined by the values you get when inserting basis vectors and basis covectors everywhere, giving the components
$$T^{i\ldots j}_{k\ldots l}=T(\underbrace{\omega^i,\ldots,\omega^j}_{p},\underbrace{e_k,\ldots,e_l}_{q})$$



OK, we now have components, but we have still not defined the tensor product of tensors. But that is actually quite easy:



Let $x\in T^p_qV$ and $y\in T^r_sV$. That is, $x$ is a function that takes $p$ covectors and $q$ vectors and gives a scalar, while $y$ takes $r$ covectors and $s$ vectors to a scalar. Then the tensor product $x\color{blue}{\otimes} y$ is a function that takes $p+r$ covectors and $q+s$ vectors, feeds the first $p$ covectors and the first $q$ vectors to $x$, and the remaining $r$ covectors and $s$ vectors to $y$, and then multiplies the results. That is,
$$(x\color{blue}{\otimes} y)(\underbrace{\kappa,\ldots,\lambda,\mu,\ldots,\nu}_{p+r},\underbrace{u,\ldots,v,w,\ldots,x}_{q+s}) = x(\underbrace{\kappa,\ldots,\lambda}_p,\underbrace{u,\ldots,v}_q)\cdot y(\underbrace{\mu,\ldots,\nu}_{r},\underbrace{w,\ldots,x}_{s})$$
It is not hard to check that this function is indeed also multilinear, and therefore $x\color{blue}{\otimes} y\in T^{p+r}_{q+s}V$.



And now, finally, we get to the question what the components of $x\color{blue}{\otimes} y$ are. Well, the components of $x\color{blue}{\otimes} y$ are just the function values when inserting basis vectors and basis covectors, and when you do that and use the definition of the tensor product, you find that indeed, the components of the tensor product are the Kronecker product of the components of the factors.
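This relation between the tensor product and the Kronecker product can be verified directly; a sketch for two covectors on $\Bbb R^2$ (the component values are arbitrary):

```python
import numpy as np

# Components of two covectors alpha, beta on R^2 (arbitrary values)
alpha = np.array([1.0, 2.0])
beta = np.array([3.0, -1.0])

# Components of the (0,2)-tensor alpha (x) beta: T_ij = alpha_i * beta_j
T = np.outer(alpha, beta)

# Flattened, these are exactly the Kronecker product of the component arrays
kron_matches = np.allclose(np.kron(alpha, beta), T.ravel())

# The defining property (alpha (x) beta)(v, w) = alpha(v) * beta(w)
v, w = np.array([2.0, 5.0]), np.array([1.0, 1.0])
lhs = v @ T @ w                  # evaluate via the components T_ij
rhs = (alpha @ v) * (beta @ w)   # evaluate each factor, then multiply
```

Here `np.outer` plays the role of $\color{blue}{\otimes}$ on components, and `np.kron` is the same data in flattened layout.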



Also, it can be shown that $T^p_qV$ is a vector space in its own right, and therefore the $(p,q)$-tensors can be written as linear combinations of a basis that is $1$ for exactly one combination of basis vectors and basis covectors and $0$ for all other combinations. However, it can then easily be seen that this is just the tensor product of the corresponding dual covectors/vectors. Since furthermore, in that basis, the coefficients on the basis vectors are just the components of the tensor as introduced before, we finally arrive at the formula
$$T = \sum T^{i\ldots j}_{k\ldots l}\,\underbrace{e_i\color{blue}{\otimes}\dots\color{blue}{\otimes} e_j}_{p}\color{blue}{\otimes}\underbrace{\omega^k\color{blue}{\otimes}\dots\color{blue}{\otimes}\omega^l}_{q}$$






  • Fantastic answer... Could you make it easier to follow by "mapping" the terms "in the tensor product" and "the multilinear function" with, I presume, equations (2) and (3) respectively? Also, I presume the actual number crunching (sorry about the term) would occur in Eq. (2), while Eq. (3) is more of a "theoretical" or "mathy" definition of a set... somehow... very tentative...
    – Antoni Parellada
    Feb 13 '17 at 0:25

  • Thank you. I have added those equation references (I hope I understood correctly what you mean). Anyway, (2) introduces the formal notation, while (3) really says what you can do with it (namely insert vectors and covectors in order to obtain a scalar). So the "number crunching" actually happens in (3).
    – celtschk
    Feb 13 '17 at 0:34

  • Also, regarding the "number crunching": I understand your point in the comment, but I know how to do a matrix Kronecker multiplication: $\begin{bmatrix}2\\3\end{bmatrix}\begin{bmatrix}1&2\end{bmatrix}=\begin{bmatrix}2&4\\3&6\end{bmatrix}$, which I presume is the $\otimes$ operation in Eq. (2), but I imagine the Cartesian product $\times$ in Eq. (3) as basically a matrix of paired elements $\begin{bmatrix}(V_1,V^*_1)&(V_2,V^*_2)\\ (V_3,V^*_4)&(V_{45},V^*_{67})\end{bmatrix}$ (vignette without meaning), rather than a numerical result. I'd say this could be the last hurdle to "getting it".
    – Antoni Parellada
    Feb 13 '17 at 0:50

  • @AntoniParellada: See my edit (it actually got quite a bit longer than originally planned :-))
    – celtschk
    Feb 13 '17 at 21:04

  • @AntoniParellada: OK, final one: I apparently didn't read carefully enough. Now I actually found the error you were trying to point out the whole time, and corrected it. Thank you very much, and sorry for my partial blindness. Anyway, the other edits also corrected an error and in addition I think removed some possible source of confusion, so in the end my "blindness" about it actually was helpful ;-)
    – celtschk
    Feb 20 '17 at 20:51





















Taking into account what has been said here, recall that if $\{e_1,\ldots,e_n\}$ is a basis of $V$ and $\{\omega^1,\ldots,\omega^n\}$ its dual basis, then any $T\in\mathcal T^{(p,q)}$ can be written:




$$T=\sum \lambda_{i_1 \cdots i_p}^{j_1 \cdots j_q}\, e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q}$$




Think about the simple cases. Tensors of type $(0,1)$ are linear maps $T:V\rightarrow K$, that is, elements of $V^*$. Now, tensors of type $(0,2)$ are usually called bilinear forms. They are multilinear maps $T:V\times V \rightarrow K$. Now, the easiest way of building a $(0,2)$-tensor is picking two covectors $f_1$ and $f_2$ and using the tensor product:
$$T=f_1\otimes f_2$$
(note that I wrote sub-indices; there is no difference from writing super-indices, and it's advisable to do so in the general case, but for $\mathcal T^{(0,2)}(V)$ it's all right).



Now, using the first equation, every $(0,2)$-tensor looks like this:
$$T=\sum \lambda_{ij}\, \omega^i\otimes \omega^j$$
(recall that $\{\omega^1,\ldots,\omega^n\}$ is a basis of $V^*$).
So now you can see that $\mathcal T^{(0,2)}(V)=V^*\otimes V^*$.



In the general case, all tensors are sums of multiples of tensors like this:
$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q} \qquad \qquad (1)$$



so this is why $\mathcal T^{(p,q)}(V)=V\otimes \overset{p \text{ times}}{\cdots} \otimes V \otimes V^*\otimes \overset{q \text{ times}}{\cdots} \otimes V^*$





Let us focus on tensors like $(1)$.
If $v_1, \ldots, v_q\in V$ and $\xi^1, \ldots, \xi^p\in V^*$, then

$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q}(\xi^1, \ldots, \xi^p, v_1, \ldots, v_q)=e_{i_1}(\xi^1)\cdots e_{i_p}(\xi^p) \cdot \omega^{j_1}(v_1)\cdots \omega^{j_q}(v_q)$$



as I suppose you know. So, in a certain way, you could say that these $\omega$'s are waiting for a vector to produce numbers that later will be multiplied.



Now, forget that you know the elements of $V^*$ are functionals. They are now simply vectors (as they are elements of a vector space), and their dual is $V^{**}=V$. So the $e$'s, considered as functionals on $V^*$, are also waiting for a vector (but now a vector is an element of $V^*$) to produce numbers that will be multiplied.



The difficulty with these issues is that one has to keep in mind the different points of view. It is true that $V^*$ is the dual of $V$, and the elements of $V^*$ are functionals on $V$. But, as $V^*$ is again a vector space, we can forget for a moment about $V$, consider the elements of $V^*$ as simple vectors, and take the dual $V^{**}$. So, in the end, vectors and covectors behave in really similar ways.



Regarding the isomorphism $\phi: V\to V^{**}$: pick $v\in V$. Now $\phi(v)$ is a functional on $V^*$, and to define what $\phi(v)$ is, we have to say what the value of $\phi(v)(\omega)\in K$ is for every $\omega\in V^*$. But we know $\omega$ is a functional on $V$, so $\omega(v)\in K$. So we use this fact to define $\phi(v)$:




$$\phi(v)(\omega)=\omega(v)$$




In the end, we would simply write $\phi(v)=v$, so we get $$v(\omega)=\omega(v)$$
and this is why the theorem that states the isomorphism $V\to V^{**}$ is called the reflexivity theorem. When you see $v(\omega)$, just think of it as the same as $\omega(v)$.






  • Thank you. Your answer is very good. Please don't assume I know anything - I'm completely self-taught, and it would be very useful to have this great answer edited a notch or two lower in the way of assumptions and explicitly addressing the equations in the OP. One thing I do have read time and time again is the homomorphism between $V$ and $V^{**}$ for finite vector spaces.
    – Antoni Parellada
    Feb 12 '17 at 22:02

  • I will do my best. Perhaps it is easier if you point out which things are unclear to you, so I can write them with more detail or in another way. I assume you need more detail on the isomorphism $V^{**}\to V$, so I'm going to add that to the solution.
    – A. Salguero-Alarcón
    Feb 12 '17 at 22:07

  • I don't follow the second paragraph: "In other words, if $T$ has to be applied to $p$ covectors (elements of $V^*$), we would have to pick $p$ elements of $V^{**}=V$, that is, $p$ vectors. The same works for $q$. If $T$ has to be applied to $q$ vectors, we will need $q$ covectors."
    – Antoni Parellada
    Feb 13 '17 at 1:53

  • Is this what you mean with the first eq. in your answer: $T^p_q\,V = \underset{p}{\underbrace{\color{blue}{V\otimes\cdots\otimes V}}}\otimes \underset{q}{\underbrace{\color{red}{V^*\otimes\cdots\otimes V^*}}}=\sum \lambda_{i_1 \cdots i_p}^{j_1 \cdots j_q} \color{blue}{\underset{\text{we belong to } V \text{ & are waiting for elem. of } V^*}{\underbrace{e_{i_1} \otimes \cdots \otimes e_{i_p}}}} \otimes \color{red}{\underset{\text{we belong to } V^* \text{ waiting for elem. of } V}{\underbrace{\omega^{j_1} \otimes \cdots \otimes \omega^{j_q}}}}?$
    – Antoni Parellada
    Feb 13 '17 at 1:53

  • Yes, exactly. I'm editing the answer to better explain the second paragraph.
    – A. Salguero-Alarcón
    Feb 13 '17 at 21:53




















The space of linear maps $f:V\to k$ is $V^*$. As such, we can show that the space of multilinear maps from $\Pi_i^n M_i$ to $k$ will be exactly $M_1^*\otimes M_2^*\otimes\dots\otimes M_n^*$, which is exactly what is going on here.



So the space of maps from $\Pi_1^p V\times \Pi_1^q V^*\to k$ would be $\bigotimes_1^p V^*\otimes \bigotimes_1^q V^{**}$.



Since this definition seems to be implicitly working with finite-dimensional spaces, $V^{**}$ can be replaced with $V$.






  • Could we brush aside the fine points and say that in the second line of the tensor space equation in the OP, the $q$ $V^*$ elements are waiting to "eat" a vector, and "spit out" a real number for each one of the $V^*$, and that the vectors they will be provided are the $q$ elements of $V$? This is probably not so, since it would seem to imply that $p=q$...
    – Antoni Parellada
    Feb 10 '17 at 18:26

  • The $q$ $V^*$ elements in the tensor product defining the tensor space represent the component of the tensor that "eats" $q$ elements of $V$. There doesn't have to be a relationship between $p$ and $q$; I think you're confusing the $V$s in the tensor product definition as arguments of $V^*$, which they are not.
    – Rasta Mouse
    Feb 10 '17 at 18:40

  • OK... Half of the problem is solved (?) - the $p$ $V^*\times V^*$ are waiting for the $p$ vectors in $V\otimes V$ to produce numbers that later are multiplied. So what are the $V\times V$ vectors doing?
    – Antoni Parellada
    Feb 10 '17 at 18:44

  • Exactly the same thing! Here they are acting as $V^{**}$, so they are waiting for arguments in $V^*$.
    – Rasta Mouse
    Feb 12 '17 at 23:57




















I will probably not add much new, just use a slightly different language.
The $V^*$-term in equation $(2)$ for $(p,q)=(0,1)$ clearly represents a linear map $V\to K$ (in eq. $(3)$). Similarly, $V$ (in equation $(2)$ with $(p,q)=(1,0)$) can be canonically identified with $V^{**}$, elements of which are by definition linear maps $V^*\to K$ (that is, $(3)$ for $(p,q)=(1,0)$).



Using the notation common in physics, you can simply assume a fixed basis and represent vectors in coordinates like $v^i:=(v^1,\ldots, v^n)$ and duals like $\alpha_i:=(\alpha_1,\ldots,\alpha_n)$, with their pairing $\alpha_i v^i:=\sum_i \alpha_i v^i\in K$. Then you can ask "what is $v^i$ (a vector!) doing?" Answer: it takes $\alpha_i$ and returns $\alpha_i v^i$. Similarly, what is $\alpha_i$ (a dual!) doing? Answer: it takes $v^i$ and returns $\alpha_i v^i$. This construction of course directly generalizes. What is $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ doing? It takes $(v_1)^{j_1},\ldots, (\alpha^p)_{i_p}$ (in other words, $q$ vectors and $p$ covectors) and returns the corresponding sum
$$
\sum_{i_u,j_v} T_{j_1\ldots}^{i_1\ldots} \,\,(v_1)^{j_1}\cdots (\alpha^p)_{i_p}
$$
The reason is that you always want to pair upper (vector) indices with lower indices and vice versa.
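In this index notation, evaluation is a plain contraction of component arrays, and `numpy.einsum` does exactly the "pair each upper index with a lower index" bookkeeping. A sketch for a $(1,1)$-tensor with made-up components:

```python
import numpy as np

# A (1,1)-tensor on R^3: T[i, j] plays the role of T^i_j (arbitrary values)
T = np.arange(9.0).reshape(3, 3)

alpha = np.array([1.0, 0.0, 2.0])  # a covector alpha_i
v = np.array([0.0, 1.0, 1.0])      # a vector v^j

# T(alpha, v) = sum_{i,j} T^i_j alpha_i v^j: each index of T is paired
# with an index of the opposite kind
value = np.einsum('ij,i,j->', T, alpha, v)  # == 33.0 for these components
```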



Further, you ask




Can we say then that the $q$ elements $V^*\otimes V^*\otimes\cdots$ in Eq. (2) are linear functionals waiting for the same number of
vectors $V$ to produce real numbers that are later multiplied?




Here you have to be more careful. For convenience, I will restrict myself to $V^*\otimes V^*$, as it immediately generalizes. In some sense, you are right, and the object $T_{ij}$ (an element of $V^*\otimes V^*$) is "waiting" for two vectors $v,w$ on which it will be evaluated to $\sum T_{ij} v^i w^j$.



However, if you have an element of $V^*\otimes V^*$, there is no way to identify "two elements" of $V^*$ that it "represents". Note that you have a natural map $V^*\times V^*\to V^*\otimes V^*$ that assigns to two elements $(\alpha,\beta)$ the tensor $\alpha\otimes\beta$, which acts on vectors via
$$
(\alpha\otimes\beta)(v,w):=\alpha(v)\, \beta(w).
$$
But




  1. This map is not surjective. For example, a scalar product on $\Bbb R^2$ cannot be represented this way (exercise 1 for you). In fact, a tensor is of this form iff the matrix $T_{ij}$ defined above in coordinates has rank 1 (exercise 2 for you).

  2. This map is not even linear! (assuming the linear structure on $V^*\times V^*$ such that $(\alpha,\beta)+(\alpha',\beta')=(\alpha+\alpha', \beta+\beta')$). Try to verify that $(\alpha\otimes \beta)(v,w)+(\alpha'\otimes\beta')(v,w)\neq \big((\alpha+\alpha')\otimes (\beta+\beta')\big)(v,w)$ in general (exercise 3 for you).

  3. This map, however, is multilinear: $(\alpha+\alpha')\otimes \beta=\alpha\otimes \beta+\alpha'\otimes \beta$, and similarly in the second component (exercise 4).

  4. There is another, maybe more natural, definition of the tensor product that you may like more and that completely avoids the "duals". You can consider $V^*\otimes V^*$ to be an abstract vector space freely generated by vectors $\alpha\otimes \beta$ where $\alpha,\beta\in V^*$, with only the relations $(\alpha+\alpha')\otimes \beta=\alpha\otimes \beta + \alpha'\otimes \beta$, $(k\alpha)\otimes \beta=k(\alpha\otimes \beta)$, etc. In other words, you can just get used to these relations, completely ignoring a meaning of these objects. Then you don't need all those duals that bother you. (Exercise 5: show the equivalence of these two approaches.)
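For exercise 1, the point is easy to see in coordinates: any simple tensor $\alpha\otimes\beta$ has a rank-1 component matrix, while the standard scalar product on $\Bbb R^2$ has the identity as its component matrix. A small numpy check (the covector values are arbitrary):

```python
import numpy as np

# Any simple tensor alpha (x) beta has component matrix alpha_i beta_j, rank 1
alpha, beta = np.array([1.0, 2.0]), np.array([3.0, 4.0])
rank_simple = np.linalg.matrix_rank(np.outer(alpha, beta))   # == 1

# The standard scalar product on R^2 has component matrix I with rank 2,
# so it cannot be written as a single alpha (x) beta; it needs a sum of
# two simple tensors (e.g. omega^1 (x) omega^1 + omega^2 (x) omega^2)
rank_dot = np.linalg.matrix_rank(np.eye(2))                  # == 2
```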


Further, I have two general recommendations. Think of a bilinear form, which is a tensor of type $(0,2)$, that is, an element of $V^*\otimes V^*$: in coordinates, just a "matrix" $T_{ij}$ that acts on two vectors. Maybe it helps to go through all these texts with this in mind. Maybe try to show (exercise 6) that a bilinear form can be identified with a linear map $V\otimes V\to K$ (this is confusing, I know).



The second recommendation: as this material is very standard and taught at all universities, it is questionable whether the best way to learn it is just from the internet (I don't know if that's what you're doing). Maybe going over a book or some Coursera course might help.

















$endgroup$













  • $begingroup$
    Thank you. It is confusing when we move from equation (2) to (3) that where there were $V$'s in eq. (2), there are $V^*$'s in eq. (3). So in your first paragraph, are you referring to the $V$'s and $V^*$ in eq.(2) or eq.(3)?
    $endgroup$
    – Antoni Parellada
    Feb 12 '17 at 23:40






  • 1




    $begingroup$
    @AntoniParellada I tried to add clearer references, hope it is better now
    $endgroup$
    – Peter Franek
    Feb 12 '17 at 23:45










  • $begingroup$
    It is indeed much clearer. It was never a problem with your answer - it's a problem with my understanding, and the "finicky" nature of all these indices without unequivocal English names... Thank you for your edit.
    $endgroup$
    – Antoni Parellada
    Feb 13 '17 at 0:28






  • 1




    $begingroup$
    @AntoniParellada I extended the answer to comment on your further questions; not sure if I clarified something or mystified it even more
    $endgroup$
    – Peter Franek
    Feb 13 '17 at 13:18












  • $begingroup$
    First semester at universities? I'm almost ashamed I asked the question... hahaha It must be in the Czech Republic... At MIT it is more like basic linear algebra... Please let me know if you find a Coursera module with this material.
    $endgroup$
    – Antoni Parellada
    Feb 13 '17 at 13:32





















1












$begingroup$

What are all the $(1,0)$ tensors on $V$? They're linear functions from $V^{*}$ to $\Bbb R$. It turns out that every such linear function is just "evaluate on an element of $V$". So they correspond, 1-to-1, with elements of $V$. (This doesn't work in infinite dimensions, but for finite dimensions--definitely.)



What about $(2,0)$ tensors on $V$? They're bilinear functions on $V^{*}$. So they take in two covectors and produce a number. One way to produce such a thing is to pick a pair of vectors $v$ and $w$ and write down



$$
T_{v,w}(\phi, \theta) = (\phi(v)) \cdot (\theta(w))
$$



Of course, if you picked the pair $(2v, w/2)$, you'd get the same bilinear function. And also the same bilinear function for $(v/3, 3w)$, etc. In fact, EVERY bilinear function on $V^{*}$ looks like "evaluate on two vectors and take the product of the results" (or a sum of such things).



So these bilinear functions correspond exactly to elements of $V \otimes V$.
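As a sketch of this correspondence (covectors represented by component arrays relative to a chosen basis; the numbers are arbitrary and purely illustrative), one can check that $(2v, w/2)$ defines the very same bilinear function as $(v, w)$:

```python
import numpy as np

v = np.array([1.0, 2.0])
w = np.array([3.0, 5.0])

# T_{v,w}(phi, theta) = phi(v) * theta(w), covectors given by components
def T(v, w):
    return lambda phi, theta: np.dot(phi, v) * np.dot(theta, w)

phi = np.array([2.0, -1.0])
theta = np.array([0.5, 1.0])

# (2v, w/2) defines the same bilinear function as (v, w)
assert np.isclose(T(v, w)(phi, theta), T(2 * v, w / 2)(phi, theta))

# in components, T_{v,w} is the outer product v w^T, unchanged under the rescaling
assert np.allclose(np.outer(v, w), np.outer(2 * v, w / 2))
```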



Are you seeing the pattern here?















$endgroup$













  • $begingroup$
    I've always been confused between this point of view and the construction of the tensor product in abstract algebra. In the algebraic construction, the dual spaces don't come up at all. But in physics applications, all anyone cares about is the dual space. What is the missing link to connect these two ideas?
    $endgroup$
    – user4894
    Feb 10 '17 at 18:19






  • 1




    $begingroup$
    I think you're mistaken about the algebra construction: it can be applied to arbitrary vector spaces, and in differential geometry, many of these end up being dual spaces. As for the relationship to physics...that's outside my range of expertise, although I like to think of dual-vectors as "measurements": a dual vector takes some concrete thing (like a time interval, or a hunk of material) and associates to it a real number (the length of time, the actual mass of the material, etc.). Once you think that way, "dual"s seem pretty darned natural.
    $endgroup$
    – John Hughes
    Feb 10 '17 at 18:22










  • $begingroup$
    @JohnHughes Thanks, the idea of a functional as a measurement is helpful.
    $endgroup$
    – user4894
    Feb 10 '17 at 18:37










  • $begingroup$
    Sorry: I use "vector" to refer to things in $V$ and "covector" to refer to things in $V^{*}$. I should have mentioned that, as it's not universal by any means.
    $endgroup$
    – John Hughes
    Feb 10 '17 at 18:44










  • $begingroup$
    Let me see... Your answer focuses on the meaning of the $V$'s in Eq.(1): $T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}\rightarrow K$, not the $V^*$'s, correct? And you are really saying that $V\overset{\sim}{=}V^{**}$, right?
    $endgroup$
    – Antoni Parellada
    Feb 12 '17 at 2:23





















1












$begingroup$

[This will never be the accepted answer, so no worries.]



I found the following exposition of the functions in the OP, from Introduction to Vectors and Tensors: Linear and Multilinear Algebra by Ray M. Bowen and C.-C. Wang, very enlightening, and I'm writing it here with minimal modifications:



Step 1:



If $\mathbf{v}\in \mathscr V$ and $\mathbf{v}^*\in \mathscr V^*$, their scalar product can be computed in scalar form as follows: Choose a basis $\{\mathbf{e}_i\}$ and its dual basis $\{\mathbf{e}^i\}$ for $\mathscr V$ and $\mathscr V^*$, respectively, so we can express $\mathbf{v}$ and $\mathbf{v}^*$ in component form. Then we have



$$\left<\mathbf v^*, \mathbf v\right>=\left<v_i\,\mathbf e^i, v^j\,\mathbf e_j\right>=v_i\,v^j \left<\mathbf e^i, \mathbf e_j\right>= v_i\,v^j\,\delta_j^i=v_i\,v^i$$
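In a fixed basis, this computation is just a dot product of the two component arrays. A small NumPy sketch (hypothetical numbers, purely illustrative):

```python
import numpy as np

# components of v = v^j e_j and v* = v_i e^i in a basis and its dual basis
v_up = np.array([1.0, 4.0, 2.0])    # v^j (vector components)
v_dn = np.array([3.0, 0.5, -1.0])   # v_i (covector components)

# <v*, v> = v_i v^j <e^i, e_j> = v_i v^j delta^i_j = v_i v^i
pairing = np.dot(v_dn, v_up)
assert np.isclose(pairing, sum(v_dn[i] * v_up[i] for i in range(3)))
assert np.isclose(pairing, 3.0)  # 3*1 + 0.5*4 + (-1)*2
```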



Step 2:



If $\mathscr V_1,\cdots,\mathscr V_s$ is a collection of vector spaces, then an $s$-linear function (multilinear function) is a function



$$\mathbf A: \mathscr V_1\times\cdots\times\mathscr V_s\rightarrow \mathbb R$$



that is linear in each of its variables while the other variables are held constant. If the vector spaces $\mathscr V_1,\cdots,\mathscr V_s$ are copies of the vector space $\mathscr V$ OR its dual space $\mathscr V^*$, then $\mathbf A$ is called a TENSOR.



More specifically, a tensor of order $(p,q)$ on $\mathscr V$, where $p$ and $q$ are positive integers, is a multilinear function



$$\bbox[5px,border:2px solid aqua]{T^p_q\,\mathscr V \equiv \underset{p\text{ times}}{\underbrace{\mathscr V^*\times\cdots\times \mathscr V^*}}\times \underset{q\text{ times}}{\underbrace{\mathscr V\times\cdots\times \mathscr V}} \rightarrow \mathbb R}$$



Step 3:



If $\mathbf v$ is a vector in $\mathscr V$ and $\mathbf v^*$ is a covector in $\mathscr V^*$, then we define the function



$$\mathbf v \,\otimes\, \mathbf v^*: \mathscr V^* \times \mathscr V \rightarrow \mathbb R$$



by



$$\mathbf v \,\otimes\, \mathbf v^*\,(\mathbf u^*,\,\mathbf u)\equiv \left<\mathbf u^*,\mathbf v\right>\left<\mathbf v^*,\mathbf u\right>$$



for all $\mathbf u^*\in \mathscr V^*,\ \mathbf u\in \mathscr V$. Clearly $\mathbf v \,\otimes\, \mathbf v^*$ is a bilinear function, so $\mathbf v \,\otimes\, \mathbf v^*\in T^1_1(\mathscr V)$.
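A quick numerical sketch of Step 3 (components in a chosen basis; the numbers are arbitrary): the function $\mathbf v\otimes\mathbf v^*$ is bilinear and, in coordinates, its component matrix is the outer product $v^i\,v^*_j$.

```python
import numpy as np

v = np.array([1.0, 2.0])        # vector in V, components v^i
v_star = np.array([3.0, -1.0])  # covector in V*, components v*_j

# (v ⊗ v*)(u*, u) = <u*, v> <v*, u>
def v_otimes_vstar(u_star, u):
    return np.dot(u_star, v) * np.dot(v_star, u)

u_star = np.array([0.0, 1.0])
u = np.array([2.0, 2.0])

# bilinearity in the first slot: scaling u* scales the value
assert np.isclose(v_otimes_vstar(3 * u_star, u), 3 * v_otimes_vstar(u_star, u))
# its components (v ⊗ v*)^i_j = v^i v*_j form the outer product matrix
assert np.isclose(v_otimes_vstar(u_star, u), u_star @ np.outer(v, v_star) @ u)
```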



Step 4:



The tensor product can be defined for an arbitrary number of vectors and covectors. Let $\mathbf v_1,\cdots,\mathbf v_p$ be vectors in $\mathscr V$ and $\mathbf v^1, \cdots, \mathbf v^q$ be covectors in $\mathscr V^*$. Then we define the function



$$\mathbf v_1\otimes\cdots\otimes\,\mathbf v_p\,\otimes\,\mathbf v^1\otimes\cdots\otimes\,\mathbf v^q:\mathscr V^*\times\cdots\times\mathscr V^*\times\mathscr V\times\cdots\times\mathscr V\rightarrow \mathbb R$$



by



$$\bbox[5px,border:2px solid aqua]{\begin{align}&\mathbf v_1\otimes\cdots\otimes\,\mathbf v_p\,\otimes\,\mathbf v^1\otimes\cdots\otimes\,\mathbf v^q\,\left(\mathbf u^1,\cdots,\mathbf u^p,\mathbf u_1,\cdots,\mathbf u_q\right)\\[2ex]
&\equiv\left<\mathbf u^1,\mathbf v_1 \right>\cdots \left<\mathbf u^p,\mathbf v_p \right>\left<\mathbf v^1,\mathbf u_1 \right>\cdots\left<\mathbf v^q,\mathbf u_q \right>
\end{align}}$$



for all $\mathbf u_1,\cdots,\mathbf u_q\in \mathscr V$ and $\mathbf u^1,\cdots,\mathbf u^p\in \mathscr V^*$. Clearly this function is $(p+q)$-linear, so that



$$\mathbf v_1 \otimes \cdots\otimes\mathbf v_p\otimes\mathbf v^1\otimes\cdots\otimes \mathbf v^q \in T^p_q(\mathscr V)$$



is called the TENSOR PRODUCT of $\mathbf v_1,\cdots,\mathbf v_p$ and $\mathbf v^1,\cdots,\mathbf v^q$.
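Step 4 can be sketched numerically for small $p$ and $q$ (here $p=2$, $q=1$ on $\Bbb R^2$; all numbers are arbitrary and purely illustrative):

```python
import numpy as np

# p = 2 vectors and q = 1 covector, all given by component arrays
v1, v2 = np.array([1.0, 0.0]), np.array([2.0, 1.0])
c1 = np.array([0.0, 3.0])

# (v1 ⊗ v2 ⊗ c1)(u^1, u^2, u_1) = <u^1, v1> <u^2, v2> <c1, u_1>
def tensor(u1_star, u2_star, u1):
    return np.dot(u1_star, v1) * np.dot(u2_star, v2) * np.dot(c1, u1)

a = np.array([1.0, 1.0])
b = np.array([0.5, 2.0])
u = np.array([1.0, 2.0])

# (p+q)-linearity, e.g. homogeneity in the last slot
assert np.isclose(tensor(a, b, 3 * u), 3 * tensor(a, b, u))
# value: <a,v1> = 1, <b,v2> = 3, <c1,u> = 6, product = 18
assert np.isclose(tensor(a, b, u), 18.0)
```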

















$endgroup$















    6 Answers















    7





    +100







    $begingroup$

    Let's first look at a very special type of tensor, namely the (0,1) tensor. What is it? Well, it is the tensor product of $0$ copies of members of $V$ and one copy of members of $V^*$. That is, it is a member of $V^*$.



    But what is a member of $V^*$? Well, by the very definition of $V^*$ it is a linear function $\phi:V\to K$. Let's write this explicitly:
    $$T^0_1V = V^* = \{\phi:V\to K\,|\,\phi \text{ is linear}\}$$
    You see, already at this point, where we didn't even use a tensor product, we get a $V^*$ on one side, and a $V$ on the other, simply by inserting the definition of $V^*$.



    From this, it is obvious why $(0,q)$-tensors have $q$ copies of $V^*$ in the tensor product $(2)$, but $q$ copies of $V$ in the domain of the multilinear function in $(3)$.



    OK, but why do you have a $V^*$ in the map in $(3)$ for each factor $V$ in the tensor product? After all, vectors are not functions, are they?



    Well, in some sense they are: There is a natural linear map from $V$ to its double dual $V^{**}$, that is, the set of linear functions from $V^*$ to $K$. Indeed, for finite dimensional vector spaces, you even have that $V^{**} \cong V$. This natural map is defined by the condition that applying the image of $v$ to $\phi\in V^*$ gives the same value as applying $\phi$ to $v$. I suspect that the lecture assumes finite dimensional vector spaces. In that case, you can identify $V$ with $V^{**}$, and therefore you get
    $$T^1_0V = V = V^{**} = \{T:V^*\to K\,|\,T \text{ is linear}\}$$
    Here the second equality is exactly that identification.



    Now again it should be obvious why $p$ copies of $V$ in the tensor product $(2)$ give $p$ factors of $V^*$ for the domain of the multilinear functions in $(3)$.



    Edit: On request in the comments, something about the relations of those terms to the Kronecker product.



    The tensor product $\color{darkorange}{\otimes}$ in $(2)$ is a tensor product not of (co)vectors, but of (co)vector spaces. The result of that tensor product describes not one tensor, but the set of all tensors of a given type. The tensors are then elements of the corresponding set. And given a basis of $V$, the tensors can then be specified by giving their coefficients in that basis.



    This is completely analogous to the vector space itself. We have the vector space, $V$, this vector space contains vectors $v\in V$, and given a basis $\{e_i\}$ of $V$, we can write the vector in components, $v = \sum_i v^i e_i$.



    Similarly for $V^*$, we can write each member $\phi\in V^*$ in the dual basis $\omega^i$ (defined by $\omega^i(e_j)=\delta^i_j$) as $\sum_i \phi_i \omega^i$. An alternative way to get the components $\phi_i$ is to notice that $\phi(e_k) = \sum_i \phi_i \omega^i(e_k) = \sum_i \phi_i \delta^i_k = \phi_k$. That is, the components of the covector are just the function values at the basis vectors.



    This way one also sees immediately that $\phi(v) = \sum_i \phi(v^i e_i) = \sum_i v^i\phi(e_i) = \sum_i v^i \phi_i$, which is sort of like an inner product, but not exactly, because it behaves differently under a change of basis.



    Now let's look at a $(0,2)$ tensor, that is, a bilinear function $f:V\times V\to K$. Note that $f\in V^*\color{darkorange}{\otimes} V^*$, as $V^*\color{darkorange}{\otimes} V^*$ is by definition the set of all such functions (see eq. $(3)$). Now by being a bilinear function, one again only needs to know the values at the basis vectors, as
    $$f(v,w) = f\left(\sum_i v^i e_i, \sum_j w^j e_j\right) = \sum_{i,j}v^i w^j f(e_i,e_j)$$
    and therefore we can define the components $f_{ij} = f(e_i,e_j)$ and get $f(v,w)=\sum_{i,j}f_{ij}v^iw^j$.
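A sketch of this in NumPy (random components, purely illustrative): reconstructing the bilinear function from its components $f_{ij}$ and checking that $f(v,w)=\sum_{i,j}f_{ij}v^iw^j$, i.e. $v^\top F\, w$:

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(3, 3))   # components f_ij = f(e_i, e_j)

def f(v, w):
    # the bilinear function reconstructed from its components
    return sum(F[i, j] * v[i] * w[j] for i in range(3) for j in range(3))

v = rng.normal(size=3)
w = rng.normal(size=3)

# f(v, w) = sum_ij f_ij v^i w^j, i.e. v^T F w
assert np.isclose(f(v, w), v @ F @ w)
# bilinearity: f(2v, w) = 2 f(v, w)
assert np.isclose(f(2 * v, w), 2 * f(v, w))
```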



    This goes also for general tensors: A single tensor $T\in T^p_qV$ is a multilinear function $T:(V^*)^p\times V^q\to K$, and it is completely determined by the values you get when inserting basis vectors and basis covectors everywhere, giving the components
    $$T^{i\ldots j}_{k\ldots l}=T(\underbrace{\omega^i,\ldots,\omega^j}_{p},\underbrace{e_k,\ldots,e_l}_{q})$$



    OK, we now have components, but we have still not defined the tensor product of tensors. But that is actually quite easy:



    Let $x\in T^p_qV$ and $y\in T^r_sV$. That is, $x$ is a function that takes $p$ covectors and $q$ vectors, and gives a scalar, while $y$ takes $r$ covectors and $s$ vectors to a scalar. Then the tensor product $x\color{blue}{\otimes} y$ is a function that takes $p+r$ covectors and $q+s$ vectors, feeds the first $p$ covectors and the first $q$ vectors to $x$, and the remaining $r$ covectors and $s$ vectors to $y$, and then multiplies the results. That is,
    $$(x\color{blue}{\otimes} y)(\underbrace{\kappa,\ldots,\lambda,\mu,\ldots,\nu}_{p+r},\underbrace{u,\ldots,v,w,\ldots,z}_{q+s}) = x(\underbrace{\kappa,\ldots,\lambda}_p,\underbrace{u,\ldots,v}_q)\cdot y(\underbrace{\mu,\ldots,\nu}_{r},\underbrace{w,\ldots,z}_{s})$$
    It is not hard to check that this function is indeed also multilinear, and therefore $x\color{blue}{\otimes} y\in T^{p+r}_{q+s}V$.



    And now finally, we get to the question what the components of $x\color{blue}{\otimes} y$ are. Well, the components of $x\color{blue}{\otimes} y$ are just the function values when inserting basis vectors and basis covectors, and when you do that and use the definition of the tensor product, you find that indeed, the components of the tensor product are the Kronecker product of the components of the factors.
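This claim can be verified numerically for two $(0,2)$ tensors on $\Bbb R^2$ (a sketch with arbitrary components; note the component array matches `np.kron` only after grouping the indices the way the Kronecker product does):

```python
import numpy as np

# two (0,2) tensors x, y on R^2, given by their component matrices
X = np.array([[1.0, 2.0], [3.0, 4.0]])
Y = np.array([[0.0, 1.0], [-1.0, 2.0]])

def x(v, w): return v @ X @ w
def y(v, w): return v @ Y @ w

# (x ⊗ y)(u, v, w, z) = x(u, v) * y(w, z): a (0,4) tensor
def x_otimes_y(u, v, w, z):
    return x(u, v) * y(w, z)

# its components are (x⊗y)_{ijkl} = X_ij Y_kl ...
comp = np.einsum('ij,kl->ijkl', X, Y)
e = np.eye(2)
for i in range(2):
    for j in range(2):
        for k in range(2):
            for l in range(2):
                assert np.isclose(comp[i, j, k, l],
                                  x_otimes_y(e[i], e[j], e[k], e[l]))

# ... which equals the Kronecker product of X and Y once the row index is
# (i, k) and the column index is (j, l), as np.kron arranges its blocks
assert np.allclose(comp.transpose(0, 2, 1, 3).reshape(4, 4), np.kron(X, Y))
```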



    Also, it can be shown that $T^p_qV$ is a vector space in its own right, and therefore the $(p,q)$-tensors can be written as linear combinations of basis tensors, each of which is $1$ for exactly one combination of basis vectors and basis covectors and $0$ for all other combinations. However, it can then easily be seen that such a basis tensor is just the tensor product of the corresponding dual covectors/vectors. Since furthermore, in that basis, the coefficients are just the components of the tensor as introduced before, we finally arrive at the formula
    $$T = \sum T^{i\ldots j}_{k\ldots l}\,\underbrace{e_i\color{blue}{\otimes}\dots\color{blue}{\otimes} e_j}_{p}\color{blue}{\otimes}\underbrace{\omega^k\color{blue}{\otimes}\dots\color{blue}{\otimes}\,\omega^l}_{q}$$

















    $endgroup$













    • $begingroup$
      Fantastic answer... Could you make it easier to follow by "mapping" the terms "in the tensor product" and "the multi-linear function" with, I presume equation (2) and (3) respectively? Also, I presume the actual number crunching (sorry about the term) would occur in Eq. (2), while Eq. (3) is more of a "theoretical" or "mathy" definition of a set... somehow... very tentative...
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:25










    • $begingroup$
      Thank you. I have added those equation references (I hope I understood correctly what you mean). Anyway, (2) introduces the formal notation, while (3) really says what you can do with it (namely insert vectors and covectors in order to obtain a scalar). So the "number crunching" actually happens in (3).
      $endgroup$
      – celtschk
      Feb 13 '17 at 0:34










    • $begingroup$
      Also, regarding the "number crunching" I understand your point in the comment, but I know how to do a matrix Kronecker multiplication: $\begin{bmatrix}2\\3\end{bmatrix}\begin{bmatrix}1&2\end{bmatrix}=\begin{bmatrix}2&4\\3&6\end{bmatrix}$, which I presume is the $\otimes$ operation in Eq.(2), but I imagine the Cartesian product $\bf\times$ in Eq. (3) as basically a matrix of paired elements $\begin{bmatrix}(V_1,V^*_1)&(V_2,V^*_2)\\ (V_3,V^*_4)&(V_{45},V^*_{67})\end{bmatrix}$ (vignette without meaning), rather than a numerical result. I'd say this could be the last hurdle to "getting it".
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:50








    • 1




      $begingroup$
      @AntoniParellada: See my edit (it actually got quite a bit longer than originally planned :-))
      $endgroup$
      – celtschk
      Feb 13 '17 at 21:04






    • 1




      $begingroup$
      @AntoniParellada: OK, final one: I apparently didn't read carefully enough. Now I actually found the error you were trying to point out the whole time, and corrected it. Thank you very much, and sorry for my partial blindness. Anyway, the other edits also corrected an error and in addition I think removed some possible source of confusion, so in the end my "blindness" about it actually was helpful ;-)
      $endgroup$
      – celtschk
      Feb 20 '17 at 20:51


















    7





    +100







    $begingroup$

    Let's first look at a very special type of tensor, namely the (0,1) tensor. What is it? Well, it is the tensor product of $0$ copies of members of $V$ and one copy of members of $V^*$. That is, it is a member of $V^*$.



    But what is a member of $V^*$? Well, by the very definition of $V^*$ is is a linear function $phi:Vto K$. Let's write this explicitly:
    $$T^0_1V = V^* = {phi:Vto K|phi text{ is linear}}$$
    You see, already at this point, where we didn't even use a tensor product, we get a $V^*$ on one side, and a $V$ on the other, simply by inserting the definition of $V^*$.



    From this, it is obvious why $(0,q)$-tensors have $q$ copies of $V^*$ in the tensor product $(2)$, but $q$ copies of $V$ in the domain of the multilinear function in $(3)$.



    OK, but why do you have a $V^*$ in the map in $(3)$ for each factor $V$ in the tensor product? After all, vectors are not functions, are they?



    Well, in some sense they are: There is a natural linear map from $V$ to its double dual $V^{**}$, that is, the set of linear functions from $V^*$ to $K$. Indeed, for finite dimensional vector spaces, you even have that $V^{**} cong V$. This natural map is defined by the condition that applying the image of $v$ to $phiin V^*$ gives the same value as applying $phi$ to $v$. I suspect that the lecture assumes finite dimensional vector spaces. In that case, you can identify $V$ with $V^{**}$, and therefore you get
    $$T^1_0V = V = V^{**} = {T:V^*to K|T text{ is linear}}$$
    Here the second equality is exactly that identification.



    Now again it should be obvious why $p$ copies of $V$ in the tensor product $(2)$ give $p$ factors of $V^*$ for the domain of the multilinear functions in $(3)$.



    Edit: On request in the comments, something about the relations of those terms to the Kronecker product.



    The tensor product $color{darkorange}{otimes}$ in $(2)$ is a tensor product not of (co)vectors, but of (co)vector spaces. The result of that tensor product describes not one tensor, but the set of all tensors of a given type. The tensors are then elements of the corresponding set. And given a basis of $V$, the tensors can then be specified by giving their coefficients in that basis.



    This is completely analogous to the vector space itself. We have the vector space, $V$, this vector space contains vectors $vin V$, and given a basis ${e_i}$ of $V$, we can write the vector in components, $v = sum_i v^i e_i$.



    Similarly for $V^*$, we can write each member $phiin V^*$ in the dual basis $omega^i$ (defined by $omega^i(e_j)=delta^i_j$) as $sum_i phi_i omega^i$. An alternative way to get the components $phi_i$ is to notice that $phi(e_k) = sum_i phi_i omega^i(e_k) = sum_i phi_i delta^i_k = phi_k$. That is, the components of the covector are just the function values at the basis vectors.



    This way one also sees immediately that $phi(v) = sum_i phi(v^i e_i) = sum_i v^iphi(e_i) = sum_i v^i phi_i$, which is sort of like an inner product, but not exactly, because it behaves differently at change of basis.



    Now let's look at a $(0,2)$ tensor, that is, a bilinear function $f:Vtimes Vto K$. Note that $fin V^*color{darkorange}{otimes} V^*$, as $V^*color{darkorange}{otimes} V^*$ is by definition the set of all such functions (see eq. $(3)$). Now by being a bilinear function, one again only needs to know the values at the basis vectors, as
    $$f(v,w) = f(sum_i v^i e_i, sum_j w^j e_j) = sum_{i,j}v^i w^j f(e_i,e_j)$$
    and therefore we can define as components $f_{ij} = f(e_i,e_j)$ and get $f(v,w)=sum_{i,j}f_{ij}v^iw^j$.



    This goes also for general tensors: A single tensor $Tin T^p_qV$ is a multilinear function $T:(V^*)^ptimes V^qto K$, and it is completely determined by the values you get when inserting basis vectors and basis covectors everywhere, giving the components
    $$T^{ildots j}_{kldots l}=T(underbrace{omega^i,ldots,omega^j}_{p},underbrace{e_k,ldots,e_l}_{q})$$



    OK, we now have components, but we have still not defined the tensor product of tensors. But that is actually quite easy:



    Be $xin T^p_qV$, and $yin T^r_sV$. That is, $x$ is a function that takes $p$ covectors and $q$ vectors, and gives a scalar, while $y$ takes $r$ covectors and $s$ vectors to a scalar. Then the tensor product $xcolor{blue}{otimes} y$ is a function that takes $p+r$ covectors and $q+s$ vectors, feeds the first $p$ covectors and the first $q$ vectors to $x$, and the remaining $r$ covectors and $s$ vectors to $y$, and them multiplies the result. That is,
    $$(xcolor{blue}{otimes} y)(underbrace{kappa,ldots,lambda,mu,ldots,nu}_{p+r},underbrace{u,ldots,v,w,ldots,x}_{q+s}) = x(underbrace{kappa,ldots,lambda}_p,underbrace{u,ldots,v}_q)cdot y(underbrace{mu,ldots,nu}_{r},underbrace{w,ldots,x}_{s})$$
    It is not hard to check that this function is indeed also multilinear, and therefore $xcolor{blue}{otimes} yin T^{p+r}_{q+s}V$.



    And now finally, we get to the question what the components of $xcolor{blue}{otimes} y$ are. Well, the components of $xcolor{blue}{otimes} y$ are just the function values when inserting basis vectors and basis covectors, and when you do that and use the definition of the tensor product, you find that indeed, the components of the tensor product are the Kronecker product of the components of the factors.



    Also, it can be shown that $T^p_qV$ is a vector space in its own right, and therefore the $(p,q)$-tensors can be written as the linear combination of a basis that is $1$ exactly for one combination of basis vectors and basis covectors and $0$ for all other combinations. However it can then easily be seen that this is just the tensor product of the corresponding dual covectors/vectors. Since furthermore in that basis, the coefficients on the basis vectors are just the components of the tensor as introduced before, we finally arrive at the formula
    $$T = sum T^{ildots j}_{kldots l}underbrace{e_icolor{blue}{otimes}dotscolor{blue}{otimes} e_j}_{p}color{blue}{otimes}underbrace{omega^kcolor{blue}{otimes}dots,color{blue}{otimes};omega^l}_{q}$$






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      Fantastic answer... Could you make it easier to follow by "mapping" the terms "in the tensor product" and "the multi-linear function" with, I presume equation (2) and (3) respectively? Also, I presume the actual number crunching (sorry about the term) would occur in Eq. (2), while Eq. (3) is more of a "theoretical" or "mathy" definition of a set... somehow... very tentative...
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:25










    • $begingroup$
      Thank you. I have added those equation references (I hope I understood correctlywhat you mean). Anyway, (2) introduces the formal notation, while (3) really says what you can do with it (namely insert vectors and covectors in order to obtain a scalar). So the "number crunching" actually happens in (3).
      $endgroup$
      – celtschk
      Feb 13 '17 at 0:34










    • $begingroup$
      Also, regarding the "number crunching" I understand your point in the comment, but I know how to do a matrix Kronecker multiplication: $begin{bmatrix}2\3end{bmatrix}begin{bmatrix}1&2end{bmatrix}=begin{bmatrix}2&4\3&6end{bmatrix}$, which I presume is the $otimes$ operation in Eq.(2), but I imagine the Cartesian product $bf times$ in Eq. (3) as basically a matrix of paired elements $(begin{bmatrix}(V_1,V^*_1)&(V_2,V^*_2)\ (V_3,V^*_4)&(V_{45},V^*_{67})end{bmatrix})$ (vignette without meaning), rather than a numerical result. I'd say this could be the last hurdle to "getting it".
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:50








    • 1




      $begingroup$
      @AntoniParellada: See my edit (it actually got quite a bit longer than originally planned :-))
      $endgroup$
      – celtschk
      Feb 13 '17 at 21:04






    • 1




      $begingroup$
      @AntoniParellada: OK, final one: I apparently didn't read carefully enough. Now I actually found the error you were trying to point out the whole time, and corrected it. Thank you very much, and sorry for my partial blindness. Anyway, the other edits also corrected an error and in addition I think removed some possible source of confusion, so it the end my "blindness" about it actually was helpful ;-)
      $endgroup$
      – celtschk
      Feb 20 '17 at 20:51
















    7





    +100







    7





    +100



    7




    +100



    $begingroup$

    Let's first look at a very special type of tensor, namely the (0,1) tensor. What is it? Well, it is the tensor product of $0$ copies of members of $V$ and one copy of members of $V^*$. That is, it is a member of $V^*$.



    But what is a member of $V^*$? Well, by the very definition of $V^*$ is is a linear function $phi:Vto K$. Let's write this explicitly:
    $$T^0_1V = V^* = {phi:Vto K|phi text{ is linear}}$$
    You see, already at this point, where we didn't even use a tensor product, we get a $V^*$ on one side, and a $V$ on the other, simply by inserting the definition of $V^*$.



    From this, it is obvious why $(0,q)$-tensors have $q$ copies of $V^*$ in the tensor product $(2)$, but $q$ copies of $V$ in the domain of the multilinear function in $(3)$.



    OK, but why do you have a $V^*$ in the map in $(3)$ for each factor $V$ in the tensor product? After all, vectors are not functions, are they?



    Well, in some sense they are: There is a natural linear map from $V$ to its double dual $V^{**}$, that is, the set of linear functions from $V^*$ to $K$. Indeed, for finite dimensional vector spaces, you even have that $V^{**} cong V$. This natural map is defined by the condition that applying the image of $v$ to $phiin V^*$ gives the same value as applying $phi$ to $v$. I suspect that the lecture assumes finite dimensional vector spaces. In that case, you can identify $V$ with $V^{**}$, and therefore you get
    $$T^1_0V = V = V^{**} = {T:V^*to K|T text{ is linear}}$$
    Here the second equality is exactly that identification.



    Now again it should be obvious why $p$ copies of $V$ in the tensor product $(2)$ give $p$ factors of $V^*$ for the domain of the multilinear functions in $(3)$.



    Edit: On request in the comments, something about the relations of those terms to the Kronecker product.



    The tensor product $color{darkorange}{otimes}$ in $(2)$ is a tensor product not of (co)vectors, but of (co)vector spaces. The result of that tensor product describes not one tensor, but the set of all tensors of a given type. The tensors are then elements of the corresponding set. And given a basis of $V$, the tensors can then be specified by giving their coefficients in that basis.



    This is completely analogous to the vector space itself. We have the vector space, $V$, this vector space contains vectors $vin V$, and given a basis ${e_i}$ of $V$, we can write the vector in components, $v = sum_i v^i e_i$.




    Let's first look at a very special type of tensor, namely the (0,1) tensor. What is it? Well, it is the tensor product of $0$ copies of members of $V$ and one copy of members of $V^*$. That is, it is a member of $V^*$.



    But what is a member of $V^*$? Well, by the very definition of $V^*$, it is a linear function $\phi:V\to K$. Let's write this explicitly:
    $$T^0_1V = V^* = \{\phi : V\to K \mid \phi \text{ is linear}\}$$
    You see, already at this point, where we didn't even use a tensor product, we get a $V^*$ on one side and a $V$ on the other, simply by inserting the definition of $V^*$.



    From this, it is obvious why $(0,q)$-tensors have $q$ copies of $V^*$ in the tensor product $(2)$, but $q$ copies of $V$ in the domain of the multilinear function in $(3)$.



    OK, but why do you have a $V^*$ in the map in $(3)$ for each factor $V$ in the tensor product? After all, vectors are not functions, are they?



    Well, in some sense they are: there is a natural linear map from $V$ to its double dual $V^{**}$, that is, the set of linear functions from $V^*$ to $K$. Indeed, for finite dimensional vector spaces, you even have $V^{**} \cong V$. This natural map is defined by the condition that applying the image of $v$ to $\phi\in V^*$ gives the same value as applying $\phi$ to $v$. I suspect that the lecture assumes finite dimensional vector spaces. In that case, you can identify $V$ with $V^{**}$, and therefore you get
    $$T^1_0V = V = V^{**} = \{T : V^*\to K \mid T \text{ is linear}\}$$
    Here the second equality is exactly that identification.
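    As a concrete numerical sketch of that identification (my own illustration, assuming $V=\mathbb R^3$ with the standard basis and representing a covector by its component array): applying $\phi$ to $v$ and applying $v$, viewed as an element of $V^{**}$, to $\phi$ are the same dot product.

    ```python
    import numpy as np

    # Illustrative choice: V = R^3, covectors represented by component arrays.
    v = np.array([1.0, 2.0, 3.0])      # a vector v in V
    phi = np.array([4.0, 0.0, -1.0])   # a covector phi in V*, with phi_i = phi(e_i)

    phi_of_v = phi @ v                 # phi applied to v
    v_of_phi = v @ phi                 # v viewed as an element of V**, applied to phi

    # The identification V = V** says these two numbers agree:
    assert phi_of_v == v_of_phi == 1.0
    ```
    
    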



    Now again it should be obvious why $p$ copies of $V$ in the tensor product $(2)$ give $p$ factors of $V^*$ for the domain of the multilinear functions in $(3)$.



    Edit: On request in the comments, something about the relations of those terms to the Kronecker product.



    The tensor product $\color{darkorange}{\otimes}$ in $(2)$ is a tensor product not of (co)vectors, but of (co)vector spaces. The result of that tensor product describes not one tensor, but the set of all tensors of a given type. The tensors are then elements of the corresponding set. And given a basis of $V$, the tensors can be specified by giving their coefficients in that basis.



    This is completely analogous to the vector space itself. We have the vector space $V$, this vector space contains vectors $v\in V$, and given a basis $\{e_i\}$ of $V$, we can write the vector in components, $v = \sum_i v^i e_i$.



    Similarly for $V^*$: we can write each member $\phi\in V^*$ in the dual basis $\omega^i$ (defined by $\omega^i(e_j)=\delta^i_j$) as $\sum_i \phi_i \omega^i$. An alternative way to get the components $\phi_i$ is to notice that $\phi(e_k) = \sum_i \phi_i \omega^i(e_k) = \sum_i \phi_i \delta^i_k = \phi_k$. That is, the components of the covector are just the function values at the basis vectors.



    This way one also sees immediately that $\phi(v) = \sum_i \phi(v^i e_i) = \sum_i v^i\phi(e_i) = \sum_i v^i \phi_i$, which is sort of like an inner product, but not exactly, because it behaves differently under a change of basis.
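    A small numerical sketch of the last two paragraphs (my illustration, assuming $V=\mathbb R^3$ with the standard basis): the components $\phi_i$ are just the values $\phi(e_i)$, and the pairing $\phi(v)=\sum_i v^i\phi_i$ is computed like a dot product.

    ```python
    import numpy as np

    # Illustrative setup: V = R^3 with the standard basis e_i.
    e = np.eye(3)                          # rows are the basis vectors e_0, e_1, e_2
    phi = lambda x: 2*x[0] - x[2]          # some linear functional phi : V -> K

    # The components of the covector are the function values at the basis vectors:
    phi_components = np.array([phi(e[i]) for i in range(3)])   # phi_i = phi(e_i)

    v = np.array([1.0, 5.0, 2.0])
    # phi(v) = sum_i v^i phi_i, the "inner-product-like" pairing:
    assert np.isclose(phi(v), v @ phi_components)
    ```
    
    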



    Now let's look at a $(0,2)$ tensor, that is, a bilinear function $f:V\times V\to K$. Note that $f\in V^*\color{darkorange}{\otimes} V^*$, as $V^*\color{darkorange}{\otimes} V^*$ is by definition the set of all such functions (see eq. $(3)$). By bilinearity, one again only needs to know the values at the basis vectors, as
    $$f(v,w) = f\Bigl(\sum_i v^i e_i, \sum_j w^j e_j\Bigr) = \sum_{i,j}v^i w^j f(e_i,e_j)$$
    and therefore we can define the components $f_{ij} = f(e_i,e_j)$ and get $f(v,w)=\sum_{i,j}f_{ij}v^iw^j$.
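    The determination of a bilinear form by its values on basis vectors can be checked numerically (my illustration, assuming $V=\mathbb R^2$ and an arbitrarily chosen bilinear $f$):

    ```python
    import numpy as np

    # An arbitrary (0,2) tensor on V = R^2, given directly as a bilinear function:
    f = lambda v, w: 3*v[0]*w[0] + v[0]*w[1] - 2*v[1]*w[0]

    e = np.eye(2)
    # Components f_ij = f(e_i, e_j) determine f completely:
    F = np.array([[f(e[i], e[j]) for j in range(2)] for i in range(2)])

    v = np.array([1.0, 2.0])
    w = np.array([3.0, -1.0])
    # f(v, w) = sum_ij f_ij v^i w^j, i.e. v^T F w in matrix form:
    assert np.isclose(f(v, w), v @ F @ w)
    ```
    
    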



    This goes also for general tensors: a single tensor $T\in T^p_qV$ is a multilinear function $T:(V^*)^p\times V^q\to K$, and it is completely determined by the values you get when inserting basis vectors and basis covectors everywhere, giving the components
    $$T^{i\ldots j}_{k\ldots l}=T(\underbrace{\omega^i,\ldots,\omega^j}_{p},\underbrace{e_k,\ldots,e_l}_{q})$$



    OK, we now have components, but we have still not defined the tensor product of tensors. But that is actually quite easy:



    Let $x\in T^p_qV$ and $y\in T^r_sV$. That is, $x$ is a function that takes $p$ covectors and $q$ vectors and gives a scalar, while $y$ takes $r$ covectors and $s$ vectors to a scalar. Then the tensor product $x\color{blue}{\otimes} y$ is a function that takes $p+r$ covectors and $q+s$ vectors, feeds the first $p$ covectors and the first $q$ vectors to $x$ and the remaining $r$ covectors and $s$ vectors to $y$, and then multiplies the results. That is,
    $$(x\color{blue}{\otimes} y)(\underbrace{\kappa,\ldots,\lambda,\mu,\ldots,\nu}_{p+r},\underbrace{u,\ldots,v,w,\ldots,z}_{q+s}) = x(\underbrace{\kappa,\ldots,\lambda}_p,\underbrace{u,\ldots,v}_q)\cdot y(\underbrace{\mu,\ldots,\nu}_{r},\underbrace{w,\ldots,z}_{s})$$
    It is not hard to check that this function is indeed multilinear, and therefore $x\color{blue}{\otimes} y\in T^{p+r}_{q+s}V$.



    And now, finally, we get to the question of what the components of $x\color{blue}{\otimes} y$ are. The components of $x\color{blue}{\otimes} y$ are just the function values when inserting basis vectors and basis covectors, and when you do that and use the definition of the tensor product, you find that the components of the tensor product are indeed the Kronecker product of the components of the factors.
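    This Kronecker-product claim can be verified directly for the simplest case (my illustration): two $(0,1)$ tensors, i.e. covectors, on $V=\mathbb R^2$, using the same numbers as in the comments below.

    ```python
    import numpy as np

    # Two (0,1) tensors (covectors) on V = R^2, given by their components:
    x_comp = np.array([2.0, 3.0])      # x_i = x(e_i)
    y_comp = np.array([1.0, 2.0])      # y_j = y(e_j)

    # Components of the tensor product: (x (x) y)(e_i, e_j) = x(e_i) * y(e_j)
    prod_components = np.array([[x_comp[i] * y_comp[j] for j in range(2)]
                                for i in range(2)])

    # They coincide with the outer (Kronecker) product of the component arrays:
    assert np.array_equal(prod_components, np.outer(x_comp, y_comp))
    ```

    Here `np.outer` gives `[[2, 4], [3, 6]]`, exactly the Kronecker product computed in the comments.
    
    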



    Also, it can be shown that $T^p_qV$ is a vector space in its own right, so the $(p,q)$-tensors can be written as linear combinations of basis tensors, each of which evaluates to $1$ on exactly one combination of basis vectors and basis covectors and to $0$ on all other combinations. It is then easy to see that such a basis tensor is just the tensor product of the corresponding vectors/covectors. Since furthermore, in that basis, the coefficients are just the components of the tensor as introduced before, we finally arrive at the formula
    $$T = \sum T^{i\ldots j}_{k\ldots l}\,\underbrace{e_i\color{blue}{\otimes}\cdots\color{blue}{\otimes} e_j}_{p}\color{blue}{\otimes}\underbrace{\omega^k\color{blue}{\otimes}\cdots\color{blue}{\otimes}\omega^l}_{q}$$







    edited Feb 20 '17 at 20:49

























    answered Feb 13 '17 at 0:17









    celtschk













    • $begingroup$
      Fantastic answer... Could you make it easier to follow by "mapping" the terms "in the tensor product" and "the multi-linear function" with, I presume equation (2) and (3) respectively? Also, I presume the actual number crunching (sorry about the term) would occur in Eq. (2), while Eq. (3) is more of a "theoretical" or "mathy" definition of a set... somehow... very tentative...
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:25










    • $begingroup$
      Thank you. I have added those equation references (I hope I understood correctly what you mean). Anyway, (2) introduces the formal notation, while (3) really says what you can do with it (namely insert vectors and covectors in order to obtain a scalar). So the "number crunching" actually happens in (3).
      $endgroup$
      – celtschk
      Feb 13 '17 at 0:34










    • $begingroup$
      Also, regarding the "number crunching" I understand your point in the comment, but I know how to do a matrix Kronecker multiplication: $\begin{bmatrix}2\\3\end{bmatrix}\begin{bmatrix}1&2\end{bmatrix}=\begin{bmatrix}2&4\\3&6\end{bmatrix}$, which I presume is the $\otimes$ operation in Eq. (2), but I imagine the Cartesian product $\bf\times$ in Eq. (3) as basically a matrix of paired elements $\begin{bmatrix}(V_1,V^*_1)&(V_2,V^*_2)\\ (V_3,V^*_4)&(V_{45},V^*_{67})\end{bmatrix}$ (vignette without meaning), rather than a numerical result. I'd say this could be the last hurdle to "getting it".
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:50








    • 1




      $begingroup$
      @AntoniParellada: See my edit (it actually got quite a bit longer than originally planned :-))
      $endgroup$
      – celtschk
      Feb 13 '17 at 21:04






    • 1




      $begingroup$
      @AntoniParellada: OK, final one: I apparently didn't read carefully enough. Now I actually found the error you were trying to point out the whole time, and corrected it. Thank you very much, and sorry for my partial blindness. Anyway, the other edits also corrected an error and in addition I think removed some possible source of confusion, so in the end my "blindness" about it actually was helpful ;-)
      $endgroup$
      – celtschk
      Feb 20 '17 at 20:51




















    Taking into account what has been said here, recall that if $\{e_1,\dots,e_n\}$ is a basis of $V$ and $\{\omega^1,\dots,\omega^n\}$ its dual basis, then any $T\in\mathcal T^{(p,q)}(V)$ can be written:




    \begin{equation} T=\sum \lambda_{i_1,\cdots,i_p}^{j_1,\cdots,j_q}\, e_{i_1}\otimes\cdots\otimes e_{i_p}\otimes \omega^{j_1}\otimes\cdots\otimes\omega^{j_q} \end{equation}




    Think about the simple cases. Tensors of type $(0,1)$ are linear maps $T:V\rightarrow K$, that is, elements of $V^*$. Tensors of type $(0,2)$ are usually called bilinear forms. They are multilinear maps $T:V\times V\rightarrow K$. Now, the easiest way of building a $(0,2)$-tensor is picking two covectors $f_1$ and $f_2$ and using the tensor product:
    $$T=f_1\otimes f_2$$
    (note that I wrote sub-indices; there is no difference between writing super-indices, and it's advisable to do so in the general case, but for $\mathcal T^{(0,2)}(V)$ it's all right).



    Now, using the first equation, every $(0,2)$-tensor looks like this:
    $$T=\sum \lambda_{ij}\,\omega^i\otimes\omega^j$$
    (recall that $\{\omega^1,\dots,\omega^n\}$ is a basis of $V^*$).
    So now you can see that $\mathcal T^{(0,2)}(V)=V^*\otimes V^*$.



    In the general case, all tensors are sums of multiples of tensors like this:
    $$e_{i_1}\otimes\cdots\otimes e_{i_p}\otimes\omega^{j_1}\otimes\cdots\otimes\omega^{j_q} \qquad\qquad (1)$$



    so this is why $\mathcal T^{(p,q)}(V)=\underbrace{V\otimes\cdots\otimes V}_{p\text{ times}}\otimes\underbrace{V^*\otimes\cdots\otimes V^*}_{q\text{ times}}$.





    Let us focus on tensors like $(1)$.
    If $v_1,\dots,v_q\in V$ and $\xi^1,\dots,\xi^p\in V^*$, then

    $$\bigl(e_{i_1}\otimes\cdots\otimes e_{i_p}\otimes\omega^{j_1}\otimes\cdots\otimes\omega^{j_q}\bigr)(\xi^1,\dots,\xi^p,v_1,\dots,v_q)=e_{i_1}(\xi^1)\cdots e_{i_p}(\xi^p)\cdot\omega^{j_1}(v_1)\cdots\omega^{j_q}(v_q)$$



    as I suppose you know. So, in a certain way, you could say that these $omega$'s are waiting for a vector to produce numbers that later will be multiplied.
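    As a numerical sketch of this "waiting for arguments" picture (my illustration, assuming $V=\mathbb R^2$ with the standard basis, and using the identification $V^{**}=V$ so that $e_i$ can eat a covector): evaluate the simple $(1,1)$ tensor $e_0\otimes\omega^1$ on a covector $\xi$ and a vector $v$.

    ```python
    import numpy as np

    # Illustrative setup: V = R^2, standard basis, covectors as component arrays.
    e0 = np.array([1.0, 0.0])       # basis vector e_0, acting on covectors via V** = V
    omega1 = np.array([0.0, 1.0])   # dual basis covector omega^1, omega^1(e_j) = delta^1_j

    xi = np.array([5.0, 7.0])       # a covector xi in V*, given by its components
    v = np.array([2.0, 3.0])        # a vector v in V

    # (e_0 (x) omega^1)(xi, v) = e_0(xi) * omega^1(v) = xi_0 * v^1
    value = (e0 @ xi) * (omega1 @ v)
    assert value == 15.0
    ```
    
    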



    Now, forget that you know the elements of $V^*$ are functionals. They are now simply vectors (as they are elements of a vector space), and their dual is $V^{**}=V$. So the $e$'s, considered as functionals over $V^*$, are also waiting for a vector (but now a vector is an element of $V^*$) to produce numbers that will be multiplied.



    The difficulty with these issues is that one has to keep in mind the different points of view. It is true that $V^*$ is the dual of $V$, and the elements of $V^*$ are functionals over $V$. But, as $V^*$ is again a vector space, we can forget for a moment about $V$, consider the elements of $V^*$ as plain vectors, and take the dual $V^{**}$. So, in the end, vectors and covectors behave in really similar ways.



    Regarding the isomorphism $\phi: V\to V^{**}$: if we pick $v\in V$, then $\phi(v)$ is a functional on $V^*$, and to define what $\phi(v)$ is, we have to say what the value $\phi(v)(\omega)\in K$ is for every $\omega\in V^*$. But we know $\omega$ is a functional on $V$, so $\omega(v)\in K$. So we use this fact to define $\phi(v)$:




    $$\phi(v)(\omega)=\omega(v)$$




    In the end, we simply write $\phi(v)=v$, so we get $$v(\omega)=\omega(v)$$
    and this is why the theorem stating the isomorphism $V\to V^{**}$ is called the reflexivity theorem. When you see $v(\omega)$, just think of it as the same as $\omega(v)$.








    • $begingroup$
      Thank you. Your answer is very good. Please don't assume I know anything - I'm completely self-taught, and it would be very useful to have this great answer edited a notch or two lower in the way of assumptions and explicitly addressing the equations in the OP. One thing I do have read time and time again is the homomorphism between $V$ and $V^{**}$ for finite vector spaces.
      $endgroup$
      – Antoni Parellada
      Feb 12 '17 at 22:02










    • $begingroup$
      I will do my best. Perhaps it is easier if you point out which things are unclear to you, so I can write them with more detail or in another way. I assume you need more detail in the isomorphism $V^{**}to V$, so I'm going to add them to the solution.
      $endgroup$
      – A. Salguero-Alarcón
      Feb 12 '17 at 22:07










    • $begingroup$
      I don't follow the second paragraph: "In other words, if $T $has to be applied to $p$ covectors (elements of $V^∗$), we would have to pick $p$ elements of $V^{∗∗}=V$, that is, $p$ vectors. The same works for $q$. If $T$ has to be applied to $q$ vectors, we will need $q$ covectors.".
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 1:53






    • 1




      $begingroup$
      Is this what you mean with the first eq. in your answer: $T^p_q\,V = \underset{p}{\underbrace{\color{blue}{V\otimes\cdots\otimes V}}}\otimes \underset{q}{\underbrace{\color{red}{V^*\otimes\cdots\otimes V^*}}}=\sum \lambda_{i_1,\cdots,i_p}^{j_1,\cdots,j_q}\, \color{blue}{\underset{\text{we belong to V & are waiting for elem. of V}^*}{\underbrace{e_{i_1}\otimes\cdots\otimes e_{i_p}}}}\otimes \color{red}{\underset{\text{we belong to V}^*\text{ waiting for elem. of V}}{\underbrace{\omega^{j_1}\otimes\cdots\otimes\omega^{j_q}}}}?$
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 1:53












    • $begingroup$
      Yes, exactly. I'm editing the answer to better explain the second paragraph.
      $endgroup$
      – A. Salguero-Alarcón
      Feb 13 '17 at 21:53
















    5












    $begingroup$

    Taking into account what has been said here, recall that if ${e_1,...,e_n}$ is a basis of $V$ and ${omega^1,...,omega^n}$ its dual basis, then any $Tinmathcal T^{(p,q)}$ can be written:




    begin{equation} T=sum lambda_{i_1, cdots, i_p}^{j_1, cdots, j_q} e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q} end{equation}




    Think about the simple cases. Tensors of type $(0,1)$ are linear maps $T:Vrightarrow K$, that is, elements of $V^*$. Now, tensors of type $(0,2)$ are usually called bilinear forms. They are multilinear maps $T:Vtimes V rightarrow K$. Now, the easiest way of building a $(0,2)$-tensor is picking two covectors $f_1$ and $f_2$, and use tensorial product:
    $$T=f_1otimes f_2$$
    (note that I wrote sub-indices, there is no difference between writing super-indices, and it's advisable to do it in the general case, but for $mathcal T^{(0,2)}(V)$ it's all right).



    Now, using the first equation, every $(0,2)$-tensor is like this:
    $$T=sum lambda_{ij}  omega^iotimes omega^j$$
    (recall that ${omega^1,...,omega^n})$ is a basis of $V^*$.
    So now you can see that $mathcal T^{(0,2)}(V)=V^*otimes V^*$.



    In the general case, all tensors are sums of multiples of tensors like this:
    $$e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q} qquad qquad (1)$$



    so this is why $mathcal T^{(p,q)}(V)=Votimes overset{p text{ times}}{...} otimes V otimes V^*otimes overset{q text{ times}}{...} otimes V^*$





    Let us focus on tensors like $(1)$.
    If $v_1, ..., v_qin V$ and $xi^1, ... xi^pin V^*$, then



    $$e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q}(xi_1, ... xi_q,v_1, ..., v_p,)=e_{i_1}(xi^1)cdots e_{i_p}(xi^p) cdot omega^{j_1}(v_1)cdots omega^{j_q}(v_q)$$



    as I suppose you know. So, in a certain way, you could say that these $omega$'s are waiting for a vector to produce numbers that later will be multiplied.



    Now, forget that you know the elements of $V^*$ are functionals. They now are, simply, vectors, (as they are elements of a vector space), and its dual is $V^{**}=V$. So the $e$'s, considered as functionals over $V^*$ are also waiting for a vector (but now a vector is an element of $V^*$) to produce numbers that will be multiplied.



    The difficulties with this issues is that one has to keep in mind the different points of view. It is true that $V^*$ is the dual of $V$, and the elements of $V^*$ are functionals over $V$. But, as $V^*$ is again a vector space, we can forget for a moment about $V$, considering the elements of $V^*$ as simple vectors, and take the dual $V^{**}$. So, in the end, vectors and covectors behave in really similar ways.



    Regarding the isomorphism $phi: Vto V^{**}$, if we pick $vin V$. So now, $phi(v)$ is a functional in $V^*$, and to define who $v$ is, we have to say what is the value of $phi(v)(omega)in K$ for every $omegain V^*$. But we know $omega$ is a functional on $V$, so $omega(v)in K$. So we use this fact to define $phi(v)$:




    $$phi(v)(omega)=omega(v)$$




    In the end, we would end writing $phi(v)=v$ so in the end, we get $$v(omega)=omega(v)$$
    and this is why the theorem that states the isomorphism $Vto V^{**}$ is called reflexivity theorem. When you see $v(omega)$, just think it is the same as $omega(v)$.






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      Thank you. Your answer is very good. Please don't assume I know anything - I'm completely self-taught, and it would be very useful to have this great answer edited a notch or two lower in the way of assumptions and explicitly addressing the equations in the OP. One thing I do have read time and time again is the homomorphism between $V$ and $V^{**}$ for finite vector spaces.
      $endgroup$
      – Antoni Parellada
      Feb 12 '17 at 22:02










    • $begingroup$
      I will do my best. Perhaps it is easier if you point out which things are unclear to you, so I can write them with more detail or in another way. I assume you need more detail in the isomorphism $V^{**}to V$, so I'm going to add them to the solution.
      $endgroup$
      – A. Salguero-Alarcón
      Feb 12 '17 at 22:07










    • $begingroup$
      I don't follow the second paragraph: "In other words, if $T $has to be applied to $p$ covectors (elements of $V^∗$), we would have to pick $p$ elements of $V^{∗∗}=V$, that is, $p$ vectors. The same works for $q$. If $T$ has to be applied to $q$ vectors, we will need $q$ covectors.".
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 1:53






    • 1




      $begingroup$
      Is this what you mean with the first eq. in your answer: $T^p_q,V = underset{p}{underbrace{color{blue}{Votimescdotsotimes V}} }otimes underset{q}{underbrace{color{red}{V^*otimescdotsotimes V^*}}}=sum lambda_{i_1, cdots, i_p}^{j_1, cdots, j_q} color{blue}{underset{text{we belong to V & are waiting for elem. of V}^*}{underbrace{e_{i_1} otimes cdots otimes e_{i_p}}}} otimes color{red}{underset{text{we belong to V}^*text{ waiting for elem. of V}}{underbrace{omega^{j_1} otimes cdots otimes omega^{j_q}}}}?$
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 1:53












    • $begingroup$
      Yes, exactly. I'm editing the answer to better explain the second paragraph.
      $endgroup$
      – A. Salguero-Alarcón
      Feb 13 '17 at 21:53














    5












    5








    5





    $begingroup$

    Taking into account what has been said here, recall that if ${e_1,...,e_n}$ is a basis of $V$ and ${omega^1,...,omega^n}$ its dual basis, then any $Tinmathcal T^{(p,q)}$ can be written:




    begin{equation} T=sum lambda_{i_1, cdots, i_p}^{j_1, cdots, j_q} e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q} end{equation}




    Think about the simple cases. Tensors of type $(0,1)$ are linear maps $T:Vrightarrow K$, that is, elements of $V^*$. Now, tensors of type $(0,2)$ are usually called bilinear forms. They are multilinear maps $T:Vtimes V rightarrow K$. Now, the easiest way of building a $(0,2)$-tensor is picking two covectors $f_1$ and $f_2$, and use tensorial product:
    $$T=f_1otimes f_2$$
    (note that I wrote sub-indices, there is no difference between writing super-indices, and it's advisable to do it in the general case, but for $mathcal T^{(0,2)}(V)$ it's all right).



    Now, using the first equation, every $(0,2)$-tensor is like this:
    $$T=sum lambda_{ij}  omega^iotimes omega^j$$
    (recall that ${omega^1,...,omega^n})$ is a basis of $V^*$.
    So now you can see that $mathcal T^{(0,2)}(V)=V^*otimes V^*$.



    In the general case, all tensors are sums of multiples of tensors like this:
    $$e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q} qquad qquad (1)$$



    so this is why $mathcal T^{(p,q)}(V)=Votimes overset{p text{ times}}{...} otimes V otimes V^*otimes overset{q text{ times}}{...} otimes V^*$





    Let us focus on tensors like $(1)$.
    If $v_1, ..., v_qin V$ and $xi^1, ... xi^pin V^*$, then



    $$e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q}(xi_1, ... xi_q,v_1, ..., v_p,)=e_{i_1}(xi^1)cdots e_{i_p}(xi^p) cdot omega^{j_1}(v_1)cdots omega^{j_q}(v_q)$$



    as I suppose you know. So, in a certain way, you could say that these $omega$'s are waiting for a vector to produce numbers that later will be multiplied.



    Now, forget that you know the elements of $V^*$ are functionals. They now are, simply, vectors, (as they are elements of a vector space), and its dual is $V^{**}=V$. So the $e$'s, considered as functionals over $V^*$ are also waiting for a vector (but now a vector is an element of $V^*$) to produce numbers that will be multiplied.



    The difficulties with this issues is that one has to keep in mind the different points of view. It is true that $V^*$ is the dual of $V$, and the elements of $V^*$ are functionals over $V$. But, as $V^*$ is again a vector space, we can forget for a moment about $V$, considering the elements of $V^*$ as simple vectors, and take the dual $V^{**}$. So, in the end, vectors and covectors behave in really similar ways.



    Regarding the isomorphism $phi: Vto V^{**}$, if we pick $vin V$. So now, $phi(v)$ is a functional in $V^*$, and to define who $v$ is, we have to say what is the value of $phi(v)(omega)in K$ for every $omegain V^*$. But we know $omega$ is a functional on $V$, so $omega(v)in K$. So we use this fact to define $phi(v)$:




    $$phi(v)(omega)=omega(v)$$




    In the end, we would end writing $phi(v)=v$ so in the end, we get $$v(omega)=omega(v)$$
    and this is why the theorem that states the isomorphism $Vto V^{**}$ is called reflexivity theorem. When you see $v(omega)$, just think it is the same as $omega(v)$.






    share|cite|improve this answer











    $endgroup$



    Taking into account what has been said here, recall that if ${e_1,...,e_n}$ is a basis of $V$ and ${omega^1,...,omega^n}$ its dual basis, then any $Tinmathcal T^{(p,q)}$ can be written:




    begin{equation} T=sum lambda_{i_1, cdots, i_p}^{j_1, cdots, j_q} e_{i_1} otimes cdots otimes e_{i_p} otimes omega^{j_1} otimes cdots otimes omega^{j_q} end{equation}




Think about the simple cases. Tensors of type $(0,1)$ are linear maps $T:V\rightarrow K$, that is, elements of $V^*$. Tensors of type $(0,2)$ are usually called bilinear forms; they are multilinear maps $T:V\times V \rightarrow K$. Now, the easiest way of building a $(0,2)$-tensor is to pick two covectors $f_1$ and $f_2$ and use the tensor product:
$$T=f_1\otimes f_2$$
(note that I wrote subindices; there is no difference between writing superindices, and it's advisable to do so in the general case, but for $\mathcal T^{(0,2)}(V)$ it's all right).



Now, using the first equation, every $(0,2)$-tensor looks like this:
$$T=\sum \lambda_{ij}\, \omega^i\otimes \omega^j$$
(recall that $\{\omega^1,\dots,\omega^n\}$ is a basis of $V^*$).
So now you can see that $\mathcal T^{(0,2)}(V)=V^*\otimes V^*$.
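To see this concretely, here is a minimal numpy sketch (assuming $V=\Bbb R^3$ with the standard basis; the names `lam`, `v`, `u` are purely illustrative): evaluating $T=\sum\lambda_{ij}\,\omega^i\otimes\omega^j$ on a pair of vectors is the same as the matrix expression $v^{\mathsf T}\Lambda u$.

```python
import numpy as np

# A (0,2)-tensor on V = R^3 is determined by its coefficients lambda_ij:
# T = sum_ij lambda_ij w^i (x) w^j, and T(v, u) = sum_ij lambda_ij v^i u^j.
rng = np.random.default_rng(0)
lam = rng.standard_normal((3, 3))                   # coefficients lambda_ij
v, u = rng.standard_normal(3), rng.standard_normal(3)

direct = v @ lam @ u                                # matrix form of T(v, u)

# Sum over elementary tensors: the dual basis covector w^i applied to v
# just reads off the i-th coordinate v[i].
elementary = sum(lam[i, j] * v[i] * u[j] for i in range(3) for j in range(3))

assert np.isclose(direct, elementary)
```

The coefficient matrix is exactly the "matrix of the bilinear form" from linear algebra, which is one way to remember why $\mathcal T^{(0,2)}(V)$ has dimension $n^2$.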



In the general case, every tensor is a sum of multiples of tensors of the form
$$e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q} \qquad \qquad (1)$$



so this is why $\mathcal T^{(p,q)}(V)=\underbrace{V\otimes \cdots \otimes V}_{p \text{ times}} \otimes \underbrace{V^*\otimes \cdots \otimes V^*}_{q \text{ times}}$.





Let us focus on tensors like $(1)$.
If $\xi^1, \dots, \xi^p\in V^*$ and $v_1, \dots, v_q\in V$, then



$$\bigl(e_{i_1} \otimes \cdots \otimes e_{i_p} \otimes \omega^{j_1} \otimes \cdots \otimes \omega^{j_q}\bigr)(\xi^1, \dots, \xi^p, v_1, \dots, v_q)=e_{i_1}(\xi^1)\cdots e_{i_p}(\xi^p) \cdot \omega^{j_1}(v_1)\cdots \omega^{j_q}(v_q)$$



as I suppose you know. So, in a certain way, you could say that these $\omega$'s are waiting for a vector to produce numbers that will later be multiplied.
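The evaluation rule above can be sketched numerically; this is a hypothetical illustration for an elementary $(1,1)$-tensor $e_i\otimes\omega^j$ on $\Bbb R^3$ (the function name `elementary_11` is my own):

```python
import numpy as np

n = 3
e = np.eye(n)   # rows are the basis vectors e_i
w = np.eye(n)   # rows are the dual basis covectors w^j, so w[j] @ e[i] = delta_ij

def elementary_11(i, j, xi, v):
    """Evaluate e_i (x) w^j on a covector xi and a vector v."""
    # e_i acts on the covector xi (via V** = V): e_i(xi) = xi(e_i) = xi[i]
    # w^j acts on the vector v: w^j(v) = v[j]
    return (xi @ e[i]) * (w[j] @ v)

rng = np.random.default_rng(1)
xi, v = rng.standard_normal(n), rng.standard_normal(n)

# The elementary tensor just picks one coordinate from each argument
# and multiplies the results:
assert np.isclose(elementary_11(0, 2, xi, v), xi[0] * v[2])
```

Each factor of the elementary tensor "eats" exactly one argument, and the resulting scalars are multiplied, matching the displayed formula.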



Now, forget that you know the elements of $V^*$ are functionals. They are now simply vectors (as they are elements of a vector space), and the dual of $V^*$ is $V^{**}=V$. So the $e$'s, considered as functionals on $V^*$, are also waiting for a vector (but now a "vector" is an element of $V^*$) to produce numbers that will be multiplied.



The difficulty with these issues is that one has to keep in mind the different points of view. It is true that $V^*$ is the dual of $V$, and the elements of $V^*$ are functionals on $V$. But, as $V^*$ is again a vector space, we can forget about $V$ for a moment, consider the elements of $V^*$ as plain vectors, and take the dual $V^{**}$. So, in the end, vectors and covectors behave in very similar ways.



Regarding the isomorphism $\phi: V\to V^{**}$: pick $v\in V$. Then $\phi(v)$ is a functional on $V^*$, and to define what $\phi(v)$ is, we have to say what the value $\phi(v)(\omega)\in K$ is for every $\omega\in V^*$. But we know $\omega$ is a functional on $V$, so $\omega(v)\in K$. We use this fact to define $\phi(v)$:




$$\phi(v)(\omega)=\omega(v)$$




In the end, we simply write $\phi(v)=v$, so we get $$v(\omega)=\omega(v)$$
and this is why the theorem stating the isomorphism $V\to V^{**}$ is called the reflexivity theorem. When you see $v(\omega)$, just think of it as $\omega(v)$.
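In coordinates, reflexivity is almost tautological; in this minimal sketch (the arrays `v` and `w` are arbitrary illustrative data), both pairings reduce to the same dot product:

```python
import numpy as np

# In coordinates a vector v and a covector omega are both arrays, and the
# pairing omega(v) is a dot product. Reflexivity phi(v)(omega) = omega(v)
# is the same dot product read from the other side.
rng = np.random.default_rng(2)
v = rng.standard_normal(4)      # v in V
w = rng.standard_normal(4)      # omega in V*

omega_of_v = w @ v              # omega(v)
v_of_omega = v @ w              # v(omega), viewing v as an element of V** = V

assert np.isclose(omega_of_v, v_of_omega)
```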







answered Feb 12 '17 at 21:54 by A. Salguero-Alarcón (edited Apr 13 '17 at 12:20 by Community)


• Thank you. Your answer is very good. Please don't assume I know anything - I'm completely self-taught, and it would be very useful to have this great answer edited a notch or two lower in the way of assumptions and explicitly addressing the equations in the OP. One thing I have read time and time again is the homomorphism between $V$ and $V^{**}$ for finite vector spaces. – Antoni Parellada, Feb 12 '17 at 22:02

• I will do my best. Perhaps it is easier if you point out which things are unclear to you, so I can write them with more detail or in another way. I assume you need more detail on the isomorphism $V^{**}\to V$, so I'm going to add it to the solution. – A. Salguero-Alarcón, Feb 12 '17 at 22:07

• I don't follow the second paragraph: "In other words, if $T$ has to be applied to $p$ covectors (elements of $V^*$), we would have to pick $p$ elements of $V^{**}=V$, that is, $p$ vectors. The same works for $q$. If $T$ has to be applied to $q$ vectors, we will need $q$ covectors." – Antoni Parellada, Feb 13 '17 at 1:53

• Is this what you mean with the first eq. in your answer: $T^p_q\,V = \underset{p}{\underbrace{\color{blue}{V\otimes\cdots\otimes V}}}\otimes \underset{q}{\underbrace{\color{red}{V^*\otimes\cdots\otimes V^*}}}=\sum \lambda_{i_1 \cdots i_p}^{j_1 \cdots j_q}\, \color{blue}{\underset{\text{we belong to } V \text{ & are waiting for elem. of } V^*}{\underbrace{e_{i_1} \otimes \cdots \otimes e_{i_p}}}} \otimes \color{red}{\underset{\text{we belong to } V^* \text{ waiting for elem. of } V}{\underbrace{\omega^{j_1} \otimes \cdots \otimes \omega^{j_q}}}}?$ – Antoni Parellada, Feb 13 '17 at 1:53

• Yes, exactly. I'm editing the answer to better explain the second paragraph. – A. Salguero-Alarcón, Feb 13 '17 at 21:53





















The space of linear maps $f:V\to k$ is $V^*$. More generally, one can show that the space of multilinear maps from $\prod_{i=1}^n M_i$ to $k$ is exactly $M_1^*\otimes M_2^*\otimes\dots \otimes M_n^*$, which is exactly what is going on here.

So the space of multilinear maps $\prod_1^p V\times \prod_1^q V^*\to k$ is $\bigotimes_1^p V^*\otimes \bigotimes_1^q V^{**}$.

Since this definition seems to be implicitly working with finite-dimensional spaces, $V^{**}$ can be replaced with $V$.
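The identification of multilinear maps with tensor products of duals can be sketched in numpy (a hypothetical example: the bilinear map `B` and the name `coeffs` are mine, and I take $M_1=\Bbb R^2$, $M_2=\Bbb R^3$): a bilinear map is completely recovered from its values on basis pairs, i.e. from an element of $M_1^*\otimes M_2^*$ written as a coefficient matrix.

```python
import numpy as np

# A bilinear map B: R^2 x R^3 -> R is determined by its values on basis
# pairs, B_ij = B(e_i, f_j); those coefficients are exactly an element
# of (R^2)* (x) (R^3)*.
def B(v, w):                        # some fixed bilinear map, for illustration
    return 2 * v[0] * w[1] - v[1] * w[2]

e, f = np.eye(2), np.eye(3)
coeffs = np.array([[B(e[i], f[j]) for j in range(3)] for i in range(2)])

rng = np.random.default_rng(5)
v, w = rng.standard_normal(2), rng.standard_normal(3)

# B is recovered from its coefficient matrix by bilinearity:
assert np.isclose(B(v, w), v @ coeffs @ w)
```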






answered Feb 10 '17 at 18:15 by Rasta Mouse


• Could we brush aside the fine points and say that in the second line of the tensor space equation in the OP, the $q$ $V^*$ elements are waiting to "eat" a vector, and "spit out" a real number for each one of the $V^*$, and that the vectors they will be provided are the $q$ elements of $V$? This is probably not so, since it would seem to imply that $p=q$... – Antoni Parellada, Feb 10 '17 at 18:26

• The $q$ $V^*$ elements in the tensor product defining the tensor space represent the component of the tensor that "eats" $q$ elements of $V$. There doesn't have to be a relationship between $p$ and $q$; I think you're confusing the $V$s in the tensor product definition with arguments of $V^*$, which they are not. – Rasta Mouse, Feb 10 '17 at 18:40

• OK... Half of the problem is solved (?) - the $p$ $V^*\times V^*$ are waiting for the $p$ vectors in $V\otimes V$ to produce numbers that later are multiplied. So what are the $V\times V$ vectors doing? – Antoni Parellada, Feb 10 '17 at 18:44

• Exactly the same thing! Here they are acting as $V^{**}$, so they are waiting for arguments in $V^*$. – Rasta Mouse, Feb 12 '17 at 23:57
























I will probably not add much new, just use a slightly different language.
The $V^*$ term in equation $(2)$ for $(p,q)=(0,1)$ clearly represents a linear map $V\to K$ (as in eq. $(3)$). Similarly, $V$ (in equation $(2)$ with $(p,q)=(1,0)$) can be canonically identified with $V^{**}$, whose elements are by definition linear maps $V^*\to K$ (that is, $(3)$ for $(p,q)=(1,0)$).

Using the notation common in physics, you can simply fix a basis and represent vectors in coordinates like $v^i:=(v^1,\ldots, v^n)$ and duals like $\alpha_i:=(\alpha_1,\ldots,\alpha_n)$, with their pairing $\alpha_i v^i:=\sum_i \alpha_i v^i\in K$. Then you can ask "what is $v^i$ (a vector!) doing?" Answer: it takes $\alpha_i$ and returns $\alpha_i v^i$. Similarly, what is $\alpha_i$ (a dual!) doing? Answer: it takes $v^i$ and returns $\alpha_i v^i$. This construction directly generalizes. What is $T^{i_1\ldots i_p}_{j_1\ldots j_q}$ doing? It takes $(v_1)^{j_1},\ldots, (\alpha^p)_{i_p}$ --- in other words, $q$ vectors and $p$ covectors --- and returns the corresponding sum
$$
\sum_{i_u,j_v} T_{j_1\ldots}^{i_1\ldots} \,(v_1)^{j_1}\cdots (\alpha^p)_{i_p}
$$
The reason is that you always want to pair upper (vector) indices with lower indices and vice versa.
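This index contraction is exactly what `numpy.einsum` computes; here is a minimal sketch for a hypothetical $(1,2)$-tensor $T^i_{\ jk}$ on $\Bbb R^3$ (the arrays `T`, `alpha`, `v`, `w` are illustrative data):

```python
import numpy as np

# A (1,2)-tensor T^i_{jk} eats one covector alpha_i and two vectors v^j, w^k:
#   T(alpha, v, w) = sum_{i,j,k} T^i_{jk} alpha_i v^j w^k
rng = np.random.default_rng(3)
n = 3
T = rng.standard_normal((n, n, n))     # T[i, j, k] = T^i_{jk}
alpha = rng.standard_normal(n)         # covector (lower index)
v, w = rng.standard_normal(n), rng.standard_normal(n)

# Pair each upper index with a lower one and sum over everything:
value = np.einsum('ijk,i,j,k->', T, alpha, v, w)

# The same contraction written as an explicit triple sum:
loop = sum(T[i, j, k] * alpha[i] * v[j] * w[k]
           for i in range(n) for j in range(n) for k in range(n))

assert np.isclose(value, loop)
```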



Further, you ask:

Can we say then that the $q$ elements $V^*\otimes V^*\otimes\cdots$ in Eq. (2) are linear functionals waiting for the same number of vectors $V$ to produce real numbers that are later multiplied?

Here you have to be more careful. For convenience, I will restrict myself to $V^*\otimes V^*$, as it immediately generalizes. In some sense, you are right, and the object $T_{ij}$ (an element of $V^*\otimes V^*$) is "waiting" for two vectors $v,w$ on which it will be evaluated to $\sum T_{ij} v^i w^j$.



However, if you have an element of $V^*\otimes V^*$, there is no way to identify "two elements" of $V^*$ that it "represents". Note that you have a natural map $V^*\times V^*\to V^*\otimes V^*$ that assigns to two elements $(\alpha,\beta)$ the tensor $\alpha\otimes\beta$, which acts on vectors via
$$
(\alpha\otimes\beta)(v,w):=\alpha(v)\, \beta(w).
$$
But:

1. This map is not surjective. For example, a scalar product on $\Bbb R^2$ cannot be represented this way (exercise 1 for you). In fact, a tensor is of this form iff the matrix $T_{ij}$ defined above in coordinates has rank 1 (exercise 2 for you).

2. This map is not even linear! (assuming the linear structure on $V^*\times V^*$ given by $(\alpha,\beta)+(\alpha',\beta')=(\alpha+\alpha', \beta+\beta')$). Try to verify that $(\alpha\otimes \beta)(v,w)+(\alpha'\otimes\beta')(v,w)\neq \bigl((\alpha+\alpha')\otimes (\beta+\beta')\bigr)(v,w)$ in general (exercise 3 for you).

3. This map, however, is multilinear: $(\alpha+\alpha')\otimes \beta=\alpha\otimes \beta+\alpha'\otimes \beta$, and similarly in the second component (exercise 4).

4. There is another, maybe more natural, definition of the tensor product that you may like more and that completely avoids the duals. You can consider $V^*\otimes V^*$ to be an abstract vector space freely generated by vectors $\alpha\otimes \beta$ with $\alpha,\beta\in V^*$, subject only to the relations $(\alpha+\alpha')\otimes \beta=\alpha\otimes \beta + \alpha'\otimes \beta$, $(k\alpha)\otimes \beta=k(\alpha\otimes \beta)$, etc. In other words, you can just get used to these relations, completely ignoring any deeper meaning of these objects. Then you don't need all those duals that bother you. (Exercise 5: show the equivalence of these two approaches.)
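Point 1 above can be checked numerically; this is a minimal sketch (with $V=\Bbb R^2$ and random covectors `alpha`, `beta` as illustrative data): the coefficient matrix of a pure tensor $\alpha\otimes\beta$ is an outer product, hence has rank at most 1, while the standard scalar product has coefficient matrix $I$ of rank 2.

```python
import numpy as np

# The coefficient matrix of alpha (x) beta is the outer product alpha beta^T,
# which always has rank <= 1. The standard scalar product on R^2 has
# coefficient matrix I (rank 2), so it cannot equal any alpha (x) beta.
rng = np.random.default_rng(4)
alpha, beta = rng.standard_normal(2), rng.standard_normal(2)

rank_pure = np.linalg.matrix_rank(np.outer(alpha, beta))
rank_dot = np.linalg.matrix_rank(np.eye(2))

assert rank_pure <= 1
assert rank_dot == 2
```

So general elements of $V^*\otimes V^*$ are sums of pure tensors, not single pure tensors, which is exactly why the map in the text is not surjective.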


Further, I have two general recommendations. Think of a bilinear form, which is a tensor of type $(0,2)$, that is, an element of $V^*\otimes V^*$. In coordinates, it is just a "matrix" $T_{ij}$ that acts on two vectors. It may help to go through all these texts with this example in mind. Also try to show (exercise 6) that a bilinear form can be identified with a linear map $V\otimes V\to K$ (this is confusing, I know).



The second recommendation: as this material is very standard and taught at many universities, it is questionable whether the best way to learn it is just from the internet. Going over a book or some Coursera course might help.







• Thank you. It is confusing, when we move from equation (2) to (3), that where there were $V$'s in eq. (2), there are $V^*$'s in eq. (3). So in your first paragraph, are you referring to the $V$'s and $V^*$'s in eq. (2) or eq. (3)? – Antoni Parellada, Feb 12 '17 at 23:40

• @AntoniParellada I tried to add clearer references; hope it is better now. – Peter Franek, Feb 12 '17 at 23:45

• It is indeed much clearer. It was never a problem with your answer - it's a problem with my understanding, and the "finicky" nature of all these indices without unequivocal English names... Thank you for your edit. – Antoni Parellada, Feb 13 '17 at 0:28

• @AntoniParellada I extended the answer to comment on your further questions; not sure if I clarified something or mystified it even more. – Peter Franek, Feb 13 '17 at 13:18

• First semester at universities? I'm almost ashamed I asked the question... hahaha It must be in the Czech Republic... At MIT it is more like basic linear algebra... Please let me know if you find a Coursera module with this material. – Antoni Parellada, Feb 13 '17 at 13:32


















    2












    $begingroup$

    I will probably not add much new, just use a slightly different language.
    A $V^*$-terms in equation $(2)$ for $(p,q)=(0,1)$ clearly represents a linear maps $Vto K$ (in eq $(3)$)). Similarly, $V$ (in equation $(2)$ with $(p,q)=(1,0)$) can be canonically identified with $V^{**}$, elements of which are by definition linear maps $V^*to K$ (that is $(3)$ for $(p,q)=(1,0)$).



    Using the notation common in physics, you can simply assume a fixed basis and represent vectors in coordinates like $v^i:=(v^1,ldots, v^n)$ and duals like $alpha_i:=(alpha_1,ldots,alpha_n)$ and their pairing $alpha_i v^i:=sum_i alpha_i v^iin K$. Then you can ask "what is $v^i$ (a vector!) doing?" Answer: it takes $alpha_i$ and returns $alpha_i v^i$. Similarly, what is $alpha_i$ (a dual!) doing? Answer: it takes $v^i$ and returns $alpha_i v^i$. This construction of course directly generalizes. What is $T^{i_1ldots, i_p}_{j_1ldots j_q}$ doing? It takes $(v_1)^{j_1},ldots, (alpha^p)_{i_p}$ --- in other words, $q$ vectors and $p$ covectors --- and returns the corresponding sum
    $$
    sum_{i_u,j_v} T_{j_1ldots}^{i_1ldots} ,,(v_1)^{j_1}ldots (alpha^p)_{i_p}
    $$
    The reason is that you always want to pair upper- (vector-) indices with lower indices and vice versa.



    Further, you ask




    Can we say then that the $q$ elements $V^*otimes V^*otimescdots$ in Eq. (2) are linear functionals waiting for the same number of
    vectors $V$ to produce real numbers that are later multiplied?




    Here you have to be more careful. For convenience, I will restrict myself to $V^*otimes V^*$ as it immediately generalizes. In some sense, you are right and the object $T_{ij}$ (an element of $V^*otimes V^*$) is "waiting" for two vectors $v,w$ on which it will be evaluated to $sum T_{ij} v^i w^j$.



    However, if you have an element of $V^*otimes V^*$, there is no way to identify "two elements" of $V^*$ that it "represents". Note that you have a natural map $V^*times V^*to V^*otimes V^*$ that to two elements $(alpha,beta)$ assigns $alphaotimesbeta$ which acts on vectors via
    $$
    (alphaotimesbeta)(v,w):=alpha(v) beta(w).
    $$
    But




    1. This map is not surjective. For example, a scalar product on $Bbb R^2$ cannot be represented this way (exercise 1 for you). In fact, a tensor is of this form iff the matrix $T_{ij}$ defined above in coordinates, has rank 1 (exercise 2 for you)

    2. This map is even not linear! (assuming the linear structure on $V^*times V^*$ such as $(alpha,beta)+(alpha',beta')=(alpha+alpha', beta+beta')$). Try to verify that $(alphaotimes beta)(v,w)+(alpha'otimesbeta')(v,w)neq (alpha+alpha')otimes (beta+beta')(v,w)$ in general (exercise 3 for you).

    3. This map, however, is multilinear: $(alpha+alpha')otimes beta=alphaotimes beta+alpha'otimes beta$ and similarly in the second component (exercise 4)

    4. There is another, maybe more natural, definition of tensor product that you may like more and that completely avoids the "duals". You can consider $V^*otimes V^*$ to be an abstract vector space freely generated by vectors $alphaotimes beta$ where $alpha,betain V^*$ where you have only the relations $(alpha+alpha')otimes beta=alphaotimes beta + alpha'otimes beta$, $(kalpha)otimes beta=k(alphaotimes beta)$ etc. In other words, you can just get used to these relations, completely ignoring a meaning of these objects. Then you don't need all those duals that bother you. (Exercise 5: show the equivalence of these two approaches)


    Further, I have two general recommendations. Think of a bilinear form, which is a tensor of type $(0,2)$, that is, an element of $V^*otimes V^*$. In coordinates, just a "matrix" $T_{ij}$ that acts on two vectors. Maybe it helps to go around all these texts with this in mind. Maybe try to show (exercise 6) that a bilinear form can be identified with a linear map $Votimes Vto K$ (this is confusing, I know)



    The second recommendation: as this material is very standard and taught at all universities, it is questionable whether the best way to learn it is just from the internet (if you do so, I don't know). Maybe going over a book or some Coursarea course might help.






    share|cite|improve this answer











    $endgroup$













    • $begingroup$
      Thank you. It is confusing when we move to equation (2) to (3) that where there were $V$'s in eq. (2), there are $V^*$ in eq. (3). So in your first paragraph, are you referring to the $V$'s and $V^*$ in eq.(2) or eq.(3)?
      $endgroup$
      – Antoni Parellada
      Feb 12 '17 at 23:40






    • 1




      $begingroup$
      @AntoniParellada I tried to add clearer references, hope it is better now
      $endgroup$
      – Peter Franek
      Feb 12 '17 at 23:45










    • $begingroup$
      It is indeed much clearer. It was never a problem with your answer - it's a problem with my understanding, and the "finicky" nature of all these indices without unequivocal English names... Thank you for your edit.
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 0:28






    • 1




      $begingroup$
      @AntoniParellada I extended the answer to comment on your further questions; not sure if I clarified something or mystified it even more
      $endgroup$
      – Peter Franek
      Feb 13 '17 at 13:18












    • $begingroup$
      First semester at universities? I'm almost ashamed I asked the question... hahaha It must be in the Czech Republic... At MIT it is more like basic linear algebra... Please let me know if you find a Coursera module with this material.
      $endgroup$
      – Antoni Parellada
      Feb 13 '17 at 13:32
















    edited Feb 13 '17 at 14:02

























    answered Feb 12 '17 at 23:31









    Peter Franek

    $begingroup$

    What are all the $(1,0)$ tensors on $V$? They're linear functions from $V^{*}$ to $\Bbb R$. It turns out that every such linear function is just "evaluate on an element of $V$". So they correspond, 1-to-1, with elements of $V$. (This doesn't work in infinite dimensions, but in finite dimensions it definitely does.)



    What about $(2,0)$ tensors on $V$? They're bilinear functions on $V^{*}$. So they take in two covectors and produce a number. One way to produce such a thing is to pick a pair of vectors $v$ and $w$ and write down



    $$
    T_{v,w}(\phi, \theta) = \big(\phi(v)\big) \cdot \big(\theta(w)\big)
    $$



    Of course, if you picked the pair $(2v, w/2)$, you'd get the same bilinear function. And also the same bilinear function for $(v/3, 3w)$, etc. In fact, EVERY bilinear function on $V^{*}$ looks like "evaluate on two vectors and take the product of the results" (or a sum of such things).



    So these bilinear functions correspond exactly to elements of $V \otimes V$.



    Are you seeing the pattern here?
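A numerical illustration of the pattern (Python with NumPy; the particular vectors and covectors are arbitrary): rescaled pairs such as $(2v, w/2)$ define the same bilinear function $T_{v,w}$, and that function is encoded by the single matrix $v\otimes w$.

```python
import numpy as np

v = np.array([1., 2.])
w = np.array([3., -1.])

# T_{v,w}(phi, theta) = phi(v) * theta(w), with covectors phi, theta
# represented as coordinate row vectors.
def T(v, w, phi, theta):
    return (phi @ v) * (theta @ w)

phi = np.array([0.5, 1.0])
theta = np.array([2.0, 2.0])

# The pairs (2v, w/2) and (v/3, 3w) define the SAME bilinear function:
print(T(v, w, phi, theta))          # all three print (approximately)
print(T(2 * v, w / 2, phi, theta))  # the same number
print(T(v / 3, 3 * w, phi, theta))

# ... and that function is captured by the single matrix v ⊗ w:
print(np.allclose(np.outer(v, w), np.outer(2 * v, w / 2)))  # True
```

This is why the correspondence goes through $V\otimes V$ rather than $V\times V$: many pairs collapse to one tensor.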















    $endgroup$













    • $begingroup$
      I've always been confused between this point of view and the construction of the tensor product in abstract algebra. In the algebraic construction, the dual spaces don't come up at all. But in physics applications, all anyone cares about is the dual space. What is the missing link to connect these two ideas?
      $endgroup$
      – user4894
      Feb 10 '17 at 18:19






    • 1




      $begingroup$
      I think you're mistaken about the algebra construction: it can be applied to arbitrary vector spaces, and in differential geometry, many of these end up being dual spaces. As for the relationship to physics...that's outside my range of expertise, although I like to think of dual-vectors as "measurements": a dual vector takes some concrete thing (like a time interval, or a hunk of material) and associates to it a real number (the length of time, the actual mass of the material, etc.). Once you think that way, "dual"s seem pretty darned natural.
      $endgroup$
      – John Hughes
      Feb 10 '17 at 18:22










    • $begingroup$
      @JohnHughes Thanks, the idea of a functional as a measurement is helpful.
      $endgroup$
      – user4894
      Feb 10 '17 at 18:37










    • $begingroup$
      Sorry: I use "vector" to refer to things in $V$ and "covector" to refer to things in $V^{*}$. I should have mentioned that, as it's not universal by any means.
      $endgroup$
      – John Hughes
      Feb 10 '17 at 18:44










    • $begingroup$
      Let me see... Your answer focuses on the meaning of the $V$s in Eq. (1): $T: \underset{p}{\underbrace{V^*\times \cdots \times V^*}}\times \underset{q}{\underbrace{V\times \cdots \times V}} \overset{\sim}{\rightarrow} K$, not the $V^*$'s, correct? And you are really saying that $V\overset{\sim}{=}V^{**}$, right?
      $endgroup$
      – Antoni Parellada
      Feb 12 '17 at 2:23


















    answered Feb 10 '17 at 18:13









    John Hughes













    $begingroup$

    [This will never be the accepted answer, so no worries.]



    I found in Introduction to Vectors and Tensors: Linear and Multilinear Algebra by Ray M. Bowen and C.-C. Wang the following exposition of the functions in the OP to be very enlightening, and I'm writing it here with minimal modifications:



    Step 1:



    If $\mathbf{v}\in \mathscr V$ and $\mathbf{v}^*\in \mathscr V^*$, their scalar product can be computed in component form as follows: choose a basis $\{\mathbf{e}_i\}$ and its dual basis $\{\mathbf{e}^i\}$ for $\mathscr V$ and $\mathscr V^*$, respectively, so that we can express $\mathbf{v}$ and $\mathbf{v}^*$ in components. Then we have



    $$\left<\mathbf v^*, \mathbf v\right>=\left<v_i\,\mathbf e^i,\; v^j\,\mathbf e_j\right>=v_i\,v^j \left<\mathbf e^i, \mathbf e_j\right>= v_i\,v^j\;\delta_j^i=v_i\,v^i$$
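    (Not part of Bowen & Wang's text: the pairing $\left<\mathbf v^*,\mathbf v\right>=v_i\,v^i$ can be checked numerically. A minimal pure-Python sketch, where plain lists of components stand in for the vector and the covector in a fixed basis:)

    ```python
    # Dual pairing <v*, v> = sum_i v_i v^i, with a covector and a vector
    # represented as lists of components in a fixed basis {e_i} and its
    # dual basis {e^i}. Illustrative sketch only, not from the quoted text.

    def pairing(v_star, v):
        """Evaluate the covector v* on the vector v: <v*, v> = v_i v^i."""
        assert len(v_star) == len(v)
        return sum(a * b for a, b in zip(v_star, v))

    v_star = [1.0, 2.0, 3.0]   # components v_i of a covector
    v      = [4.0, 5.0, 6.0]   # components v^j of a vector
    print(pairing(v_star, v))  # 1*4 + 2*5 + 3*6 = 32.0
    ```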



    Step 2:



    If $\mathscr V_1,\cdots,\mathscr V_s$ is a collection of vector spaces, then an $s$-linear function (multilinear function) is a function



    $$\mathbf A: \mathscr V_1\times\cdots\times\mathscr V_s\rightarrow \mathbb R$$



    that is linear in each of its variables while the other variables are held fixed. If each of the vector spaces $\mathscr V_1,\cdots,\mathscr V_s$ is either the vector space $\mathscr V$ or its dual space $\mathscr V^*$, then $\mathbf A$ is called a TENSOR on $\mathscr V$.



    More specifically, a tensor of order $(p,q)$ on $\mathscr V$, where $p$ and $q$ are positive integers, is a multilinear function



    $$\bbox[5px,border:2px solid aqua]{T: \underset{p\text{ times}}{\underbrace{\mathscr V^*\times\cdots\times \mathscr V^*}}\times \underset{q\text{ times}}{\underbrace{\mathscr V\times\cdots\times \mathscr V}} \rightarrow \mathbb R}$$

    and the set of all such functions is denoted $T^p_q\,\mathscr V$ (equivalently, $T^p_q(\mathscr V)$).



    Step 3:



    If $\mathbf v$ is a vector in $\mathscr V$ and $\mathbf v^*$ is a covector in $\mathscr V^*$, then we define the function



    $$\mathbf v \,\otimes\, \mathbf v^*: \mathscr V^* \times \mathscr V \rightarrow \mathbb R$$



    by



    $$\mathbf v \,\otimes\, \mathbf v^*\,(\mathbf u^*,\,\mathbf u)\equiv \left<\mathbf u^*,\mathbf v\right>\left<\mathbf v^*,\mathbf u\right>$$



    for all $\mathbf u^*\in \mathscr V^*,\ \mathbf u\in \mathscr V.$ Clearly $\mathbf v \,\otimes\, \mathbf v^*$ is a bilinear function, so $\mathbf v \,\otimes\, \mathbf v^*\in T^1_1(\mathscr V).$
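    (Again not part of the quoted text: the rank-one $(1,1)$ tensor $\mathbf v\otimes\mathbf v^*$ is easy to realize concretely, and its bilinearity can be verified on components. A hedged pure-Python sketch, with lists of components standing in for vectors and covectors:)

    ```python
    # (v ⊗ v*)(u*, u) = <u*, v><v*, u>: a rank-one (1,1) tensor as a
    # bilinear function of one covector u* and one vector u.
    # Illustrative sketch only, not from the quoted text.

    def pairing(cov, vec):
        return sum(a * b for a, b in zip(cov, vec))

    def tensor_vv(v, v_star):
        """Return the bilinear map (u*, u) -> <u*, v><v*, u>."""
        return lambda u_star, u: pairing(u_star, v) * pairing(v_star, u)

    T = tensor_vv([1.0, 2.0], [3.0, 4.0])   # v = (1,2), v* = (3,4)

    # Check linearity in the first (covector) slot on 2a + 3b:
    a, b = [1.0, 0.0], [0.0, 1.0]
    u = [5.0, 6.0]
    lhs = T([2 * x + 3 * y for x, y in zip(a, b)], u)
    rhs = 2 * T(a, u) + 3 * T(b, u)
    print(lhs == rhs)  # True
    ```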



    Step 4:



    The tensor product can be defined for an arbitrary number of vectors and covectors. Let $\mathbf v_1,\cdots,\mathbf v_p$ be vectors in $\mathscr V$ and $\mathbf v^1, \cdots, \mathbf v^q$ be covectors in $\mathscr V^*.$ Then we define the function



    $$\mathbf v_1\otimes\cdots\otimes\mathbf v_p\otimes\mathbf v^1\otimes\cdots\otimes\mathbf v^q:\underset{p}{\underbrace{\mathscr V^*\times\cdots\times\mathscr V^*}}\times\underset{q}{\underbrace{\mathscr V\times\cdots\times\mathscr V}}\rightarrow \mathbb R$$



    by



    $$\bbox[5px,border:2px solid aqua]{\begin{align}&\mathbf v_1\otimes\cdots\otimes\mathbf v_p\otimes\mathbf v^1\otimes\cdots\otimes\mathbf v^q\left(\mathbf u^1,\cdots,\mathbf u^p,\mathbf u_1,\cdots,\mathbf u_q\right)\\[2ex]
    &\equiv\left<\mathbf u^1,\mathbf v_1 \right>\cdots \left<\mathbf u^p,\mathbf v_p \right>\left<\mathbf v^1,\mathbf u_1 \right>\cdots\left<\mathbf v^q,\mathbf u_q \right>
    \end{align}}$$



    for all $\mathbf u_1,\cdots,\mathbf u_q\in \mathscr V$ and $\mathbf u^1,\cdots,\mathbf u^p\in \mathscr V^*.$ Clearly this function is $(p+q)$-linear, so that



    $$\mathbf v_1 \otimes \cdots\otimes\mathbf v_p\otimes\mathbf v^1\otimes\cdots\otimes \mathbf v^q \in T^p_q(\mathscr V)$$



    is called the TENSOR PRODUCT of $\mathbf v_1,\cdots,\mathbf v_p$ and $\mathbf v^1,\cdots,\mathbf v^q.$
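    (Not from the quoted text: the general simple $(p,q)$ tensor above is just a product of $p+q$ scalar pairings, which makes a direct numerical sketch straightforward. A hedged pure-Python illustration, evaluated on a small $(2,1)$ example:)

    ```python
    # Simple (p,q) tensor v_1⊗...⊗v_p⊗v^1⊗...⊗v^q evaluated on
    # (u^1,...,u^p, u_1,...,u_q) as a product of p+q scalar pairings.
    # Illustrative sketch only, not from the quoted text.
    import math

    def pairing(cov, vec):
        return sum(a * b for a, b in zip(cov, vec))

    def simple_tensor(vs, v_stars):
        """vs: p vectors; v_stars: q covectors. Returns the (p+q)-linear map."""
        def T(u_stars, us):
            assert len(u_stars) == len(vs) and len(us) == len(v_stars)
            return math.prod(
                [pairing(u_star, v) for u_star, v in zip(u_stars, vs)]
                + [pairing(v_star, u) for v_star, u in zip(v_stars, us)]
            )
        return T

    # A (2,1) example: two vectors v_1, v_2 and one covector v^1
    T = simple_tensor(vs=[[1.0, 0.0], [0.0, 2.0]], v_stars=[[1.0, 1.0]])
    val = T(u_stars=[[1.0, 0.0], [0.0, 1.0]], us=[[3.0, 4.0]])
    print(val)  # <u^1,v_1><u^2,v_2><v^1,u_1> = 1 * 2 * 7 = 14.0
    ```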
























        edited Jan 6 at 23:04

























        answered Feb 12 '17 at 23:11









        Antoni Parellada





























