What do upper indices represent?

+1
−0

I have seen people representing matrices in two ways.

  1. $$\sum_{j=1}^n a_{ij}$$

It represents a column matrix (actually a vector) if we assume $i=1$:

$$\begin{bmatrix}a_{11} & a_{12} & a_{13} & \cdots\end{bmatrix}$$

  2. $$\sum_{j=1}^n a^{ij}$$

What does it represent? At first I thought it was a row matrix (vector), since it is the opposite of a column matrix (vector), but while writing this question I couldn't generate a row matrix from the second expression. I became more confused when I saw $a_j^{i}$, and sometimes there are two variables in the subscript and superscript: $a_{ji}^{kl}$. I don't remember whether either of these matches what I saw (I may have written it the wrong way).

After searching a little I found that when we change the components, our vectors don't change. But I can't get deeper into covariant and contravariant. I even saw some people write an expression like this: $^ia_j$.

I was reading https://physics.stackexchange.com/q/541822/, but those answers don't explain covariant and contravariant for a beginner (the explanations assume the reader already knows something about covariance and contravariance).

I was also watching a video. What he said is that if we take some basis vectors and build a vector out of them, and then decrease the length of those basis vectors, the components behave contravariantly (I think he meant to say that changing the basis changes the components). But the explanation still isn't clear to me; he might be correct, but I have no way to tell. If he is saying that the changed basis vectors are contravariant, are the original basis vectors covariant? And how do we deal with covariant and contravariant together, as in $g^i_j$ or sometimes $g_j^i$?


2 comment threads

Wrong site? (2 comments)
Needs more context (2 comments)
Post
+0
−2

In a general sense, covariant and contravariant are not such an interesting thing. What is confusing here is the meaning of those words.

Actually, if we think of two different sets of basis vectors, then one set will be covariant and the other contravariant. But one basis vector must be smaller than the other. Suppose we have a vector that looks like this: $\vec{A}=2\vec{e_1}+2\vec{e_2}$. Here $\vec{e_1}$ and $\vec{e_2}$ are the basis vectors; I take two of each of $\vec{e_1}$ and $\vec{e_2}$ to build the vector. Now I am going to decrease the length of those basis vectors: let $\tilde{\vec{e_1}}=\frac{1}{2}\vec{e_1}$ and $\tilde{\vec{e_2}}=\frac{1}{2}\vec{e_2}$. To build the same $\vec{A}$ from these new basis vectors I must increase the coefficients, so the vector becomes $\vec{A}=4\tilde{\vec{e_1}}+4\tilde{\vec{e_2}}$, and I recover the same vector this way. Here we can write $\tilde{\vec{e_1}}=e^1$ (that's a superscript, not an exponent). $e_1$ is called covariant and $e^1$ is called contravariant.
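To see the rescaling numerically, here is a minimal NumPy sketch (my own illustration, not part of the original answer; the variable names are arbitrary). It solves for the components of the same $\vec{A}$ in the original and in the halved basis:

```python
import numpy as np

# Original basis vectors (the standard basis of R^2, for simplicity)
e1 = np.array([1.0, 0.0])
e2 = np.array([0.0, 1.0])

# The vector A = 2*e1 + 2*e2 from the answer
A = 2 * e1 + 2 * e2

# Rescaled basis: each basis vector at half its original length
e1_tilde = 0.5 * e1
e2_tilde = 0.5 * e2

# Solve B @ c = A for the components c of A in the new basis
B = np.column_stack([e1_tilde, e2_tilde])
c = np.linalg.solve(B, A)
print(c)  # [4. 4.] -- halving the basis doubled the components
```

The components scale oppositely to ("contra" to) the basis vectors, which is the behaviour the answer describes.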

We usually use this method to describe spacetime from different perspectives. There is also a possible way to transform them (it's not the one I had seen, but it describes it a little bit).

When contravariant and covariant indices are used in a single term, it means we are representing a vector using different bases. For example, let us define lines along the x, y and z axes by the basis vectors $e_1$, $e_2$ and $e_3$ respectively. Now I am going to transform them, rather than just decreasing their length, and I will call the transformed basis vectors $e^1$, $e^2$ and $e^3$. To find a curvy vector we must use the transformed and the normal basis vectors together (we could describe that vector with the normal basis vectors alone, but it is easier to find it by mixing all the basis vectors): $\vec{A}=g^1_2+g^3_1$. For a vector of any kind we use $i,j,k$ for the three coordinate directions. We can take another vector $\vec{B}=\sum_{ijk} g_i^{jk}+g_k^{ij}$. But Einstein said that since we sum them every time, we can drop the summation sign from the equation; that doesn't mean we aren't summing, we are summing by default. So the same vector looks like $\vec{B}=g_i^{jk}+g_k^{ij}$.


1 comment thread

Please correct me if I am wrong?! (5 comments)
deleted user wrote over 2 years ago · edited over 2 years ago

I am just learning; I haven't attended any lectures. Everything I learned came from lots of internet searching: Wikipedia, PSE, MSE, and finally YouTube. So I can simply be wrong (I don't care about your voting, but please leave a comment saying why I am wrong).

Derek Elkins wrote over 2 years ago

There are several issues here. First, you call everything a "vector". What the notation refers to are tensors and vectors are just the rank-1 case. Anything with other than 1 index is NOT a vector. Your description of the Einstein summation convention is completely wrong. It is not a convention to just sum over all the indexes all the time. Instead, the rule is: if we have an index that occurs both raised and lowered (once each) in a single term, then we will sum over that index in that term. Here "term" is being used in the precise sense as an argument to addition. Effectively a term will not include any uses of addition itself. Let's look at your example: $g_i^{jk} + g_k^{ij}$. First, $j$ does not occur lowered so we wouldn't sum over it. Next, the terms of this expression are $g_i^{jk}$ and $g_k^{ij}$ neither of which contain multiple occurrences of an index and so would not be summed over. Thus no sum at all would occur.
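To make the actual convention concrete, here is a small NumPy sketch (my own illustration, not part of the comment; `np.einsum` just makes the implied sums explicit):

```python
import numpy as np

v = np.array([1.0, 2.0, 3.0])  # components v^i (raised index)
w = np.array([4.0, 5.0, 6.0])  # components w_i (lowered index)

# v^i w_i: the index i occurs once raised and once lowered in a single
# term, so the Einstein convention sums over it, giving a scalar.
print(np.einsum('i,i->', v, w))  # 32.0

# A^i_j v^j: j is repeated (raised on v, lowered on A), so it is summed;
# i stays free, so the result is again a collection of components.
A = np.arange(9.0).reshape(3, 3)
print(np.einsum('ij,j->i', A, v))  # same as A @ v
```

In $g_i^{jk} + g_k^{ij}$, by contrast, no index is repeated within either term, so neither `einsum` pattern above applies and no sum is implied.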

Derek Elkins wrote over 2 years ago

Slightly less objectively, your wording and notation, compounded by your inconsistent use and unfortunate choices of notation, confuse the issue significantly. A key part of this discussion is that all this co-/contra-variant stuff has to do with how we represent vectors (and tensors generally) with respect to a coordinate system and frame of basis vectors. The vector/tensor just is and it doesn't make sense to say it's co- or contra-variant. What's co- or contra-variant is the collection of numbers we use to describe the vector. To this end, in traditional tensor notation every indexed expression is a scalar. $v_i$ is a scalar for each value of $i$. The vector is represented by the whole collection $\{v_i\}_{i=1}^2$, and this collection of numbers only identifies a vector relative to some implied basis. We should be talking about representations which transform co-/contra-variantly.
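For concreteness, here is the standard transformation law (added as an illustration, not quoted from the comment): if a new basis is $\tilde{e}_j = A^i{}_j\, e_i$, then for a fixed vector $v = v^i e_i = \tilde{v}^j \tilde{e}_j$ the components must transform with the inverse matrix,

$$\tilde{v}^j = (A^{-1})^j{}_i\, v^i,$$

varying "contra" to the basis, while the components of a covector transform with $A$ itself, $\tilde{\alpha}_j = A^i{}_j\, \alpha_i$, varying "co" with the basis.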

Derek Elkins wrote over 2 years ago

Finally, there is some context which isn't technically required but omitting it sets up confusion down the line. First, most discussions of tensors by physicists are actually discussions of tensor fields. A tensor field is a continuous (and usually smooth) assignment of tensors to points on a manifold. For example, the assignment of wind speed and direction at each point on the Earth would define a vector field on the Earth. At each point on the Earth, we'd have a vector corresponding to the wind speed and direction at that point. Discussions of co-/contra-variance involving derivatives are talking about tensor fields. There's also the very different notion of co-/contra-variance from category theory, but it has some connections to this notion of co-/contra-variance via the dual space functor. However, it takes a bit of care to separate how they are and are not related. For example, the (categorical) contravariance of the dual space functor has nothing to do with representations.

deleted user wrote over 2 years ago

Derek Elkins, wouldn't you like to write an answer?