What is the unit element in the space of more-than-2 indexed tensors?
For 2-index tensors, we have the unit matrix, i.e. the identity ("eye") matrix: $M_{ab}=\delta_{ab}$.
What is the case for tensors with more indices?
The group operation would be that we consider the tensors as linear operations, and the multiplication would be the composition (concatenated execution) of those operations.
Put more formally, for example for 3-index tensors, we are looking for the 3-index tensor $M_{abc} \in \mathbb{R}^{3 \times 3 \times 3}$ for which
$$\forall M_{a'b'c'} \in \mathbb{R}^{3 \times 3 \times 3}: M_{abc} \cdot M_{a'b'c'} = M_{a'b'c'}$$

The reason we can view order-2 tensors as matrices is that $\mathrm{Lin}(U,V)\cong\mathrm{Lin}(U\otimes V^*,\mathbb R)$, and we can view an element of $\mathrm{Lin}(U\otimes V^*,\mathbb R)\cong\mathrm{MultiLin}(U,V^*;\mathbb R)$ as an order-2 tensor, i.e. a multilinear form. The dual vector space is why we would usually use both a raised and a lowered index in this case. For higher-order tensors, there are multiple ways you could distribute the inputs to the multilinear form: $\mathrm{MultiLin}(U,V,W;\mathbb R)\cong\mathrm{Lin}(U\otimes V,W^*)\cong\mathrm{Lin}(U,V^*\otimes W^*)$, and more. If $U=V=W$, these typically won't compose even if we (non-canonically) identify $U^*$ and $U$. So it's unclear what this "group operation" would be.
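To make that concrete, here is a minimal NumPy sketch (my own illustration, not part of the original posts): for order 2 the Kronecker delta is a two-sided unit under composition, but two order-3 tensors, viewed as linear maps, generally cannot be composed at all, because the output space of one doesn't match the input space of the other.

```python
import numpy as np

# Order 2: np.eye is a two-sided unit under matrix composition.
M = np.random.rand(3, 3)
I = np.eye(3)
assert np.allclose(I @ M, M) and np.allclose(M @ I, M)

# Order 3: a tensor T in R^{2x3x4} can be viewed as a linear map from
# R^2 (x) R^3 (dimension 6) to R^4.
T = np.random.rand(2, 3, 4)
as_map = T.reshape(6, 4).T                    # shape (4, 6): Lin(R^6, R^4)
S = np.random.rand(2, 3, 4).reshape(6, 4).T   # also shape (4, 6)
# as_map @ S would raise a ValueError: the output space (dim 4) of S is
# not the input space (dim 6) of as_map, so there is no composition.
```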
As a minor nit, matrices/linear transformations don't form a group under matrix multiplication/composition (though they do under addition); only invertible matrices/linear transformations do.
O.k., then consider only the invertible tensors. For $\mathrm{Lin}(U,V)$ I think it probably means a group of the linear operations (?), but the upper/lower indices were above me. What if we consider the tensor as an n-dimensional number array? From what I see here, they really do not form a group. As I see it, the product of two 3-index tensors is a 6-, 4-, 2- or 0-index tensor, depending on how many index pairs we sum over.
What is the case for 4-index tensors? I think a formula might be $(MN)_{abcd}=\sum_{ij}M_{abij}N_{ijcd}$, resulting in a group (only for invertible, i.e. non-zero-determinant, tensors; although at the moment I have no idea what the determinant of a >2-index tensor could be).
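For what it's worth, under exactly this product a unit does exist. Here is a NumPy sketch (my own addition, not from the original comments) checking that $I_{abij}=\delta_{ai}\delta_{bj}$ works:

```python
import numpy as np

n = 3
# I4[a, b, i, j] = delta_ai * delta_bj: the n^2 x n^2 identity, reshaped.
I4 = np.eye(n * n).reshape(n, n, n, n)
N = np.random.rand(n, n, n, n)

# (I4 N)_{abcd} = sum_{ij} delta_ai delta_bj N_{ijcd} = N_{abcd}
assert np.allclose(np.einsum('abij,ijcd->abcd', I4, N), N)
assert np.allclose(np.einsum('abij,ijcd->abcd', N, I4), N)
```

The reason this works is that the product is matrix multiplication in disguise: flattening the index pairs $(a,b)$ and $(c,d)$ turns each 4-index tensor into an $n^2 \times n^2$ matrix, so one natural answer to the determinant question is simply the determinant of that flattened matrix.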
The reason I said that linear transformations failing to form a group under composition was a "minor nit" is that we can talk about a unit for composition without inverses. In technical terms, linear transformations from and to the same vector space form a monoid. More generally, vector spaces and linear transformations form a category, and you could just ask what the identities of that category are. None of this requires inverses.
The big problem with your question is that you're asking for the unit of a "group operation" that you haven't defined and that, I'm suggesting, doesn't exist in the way that you want.
What you can talk about is tensor contraction (and various other much less common operations that seem even further from what you want). This corresponds to the fact that the identity linear transformation in $\mathrm{Lin}(V,V)$ gives rise to a linear transformation in $\mathrm{Lin}(V\otimes V^*,\mathbb R)$. This also corresponds exactly to the Kronecker delta you've already mentioned. If you have $U^*\otimes V$ and $V^*\otimes W$, you can tensor them together to get $U^*\otimes V\otimes V^*\otimes W$ and then map that tensor to $U^*\otimes W$ by contracting the middle pair. Viewing $U^*\otimes V$ as $\mathrm{Lin}(U,V)$ and similarly for $V^*\otimes W$, tensor contraction in this case is exactly composition of linear transformations.
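To see the correspondence concretely, here is a small NumPy sketch (my own, not from the post): composing two linear maps is exactly a contraction over the shared index, and contracting with the Kronecker delta leaves the other tensor unchanged.

```python
import numpy as np

A = np.random.rand(4, 3)   # an element of Lin(R^3, R^4)
B = np.random.rand(3, 5)   # an element of Lin(R^5, R^3)

composed = A @ B                           # ordinary composition
contracted = np.einsum('ij,jk->ik', A, B)  # contraction over the shared j
assert np.allclose(composed, contracted)

# The identity map is the Kronecker delta; contracting with it is a no-op.
delta = np.eye(3)
assert np.allclose(np.einsum('ij,jk->ik', delta, B), B)
```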
What this corresponds to in index notation is summing over repeated indices where one is lowered and the other raised, i.e. exactly what Einstein convention covers. Since $U$, $V$, and $W$ can all themselves be tensors, this corresponds to summing over multiple pairs of raised and lowered indices with arbitrarily many extra raised and lowered indices that remain in the final tensor. None of this is specific to any order and typically isn't a closed operation on tensors of a given order.
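A hedged illustration of that last point (my own example): contraction is well defined between tensors of any orders, but the order of the result varies with how many pairs are summed, so it isn't closed on, say, 3-index tensors. This is exactly the 6/4/2/0 pattern you noticed above.

```python
import numpy as np

A = np.random.rand(2, 2, 2)
B = np.random.rand(2, 2, 2)

print(np.einsum('abc,def->abcdef', A, B).shape)  # no contraction: order 6
print(np.einsum('abi,icd->abcd', A, B).shape)    # one pair: order 4
print(np.einsum('aij,ijb->ab', A, B).shape)      # two pairs: order 2
print(np.einsum('ijk,ijk->', A, B).shape)        # three pairs: order 0 (scalar)
```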
Summing over repeated indices that are both lowered or both raised corresponds to implicitly inserting a raising/lowering tensor. For finite-dimensional spaces, this corresponds to the inner product: if we view the inner product as a function that takes one vector and returns a function expecting a second, it is an embedding in $\mathrm{Lin}(V, V^*)$. In finite-dimensional cases, this embedding is an isomorphism, albeit not naturally so in a technical sense.
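As a final sketch (again my own, under the assumption of an orthonormal basis where the metric is the identity, which is why the insertion is usually invisible in Euclidean settings):

```python
import numpy as np

g = np.eye(3)              # Euclidean metric on R^3; the Lin(V, V*) map is v -> g @ v
v = np.random.rand(3)
w = np.random.rand(3)

naive = np.einsum('a,a->', v, w)                # sum over two lowered indices
with_metric = np.einsum('ab,a,b->', g, v, w)    # the implicit raising tensor made explicit
assert np.allclose(naive, with_metric)
```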
