Post History

#3: Post edited by r~~ · 2021-10-10T18:17:17Z (about 3 years ago)
Clarify ‘abstract vector’ and tighten up some math expressions
  • I suspect what's tripping you up here is the difference between applying a linear transformation to a vector versus applying a change of basis. Both involve matrix multiplication, but the nature of the transformation is different.
  • You can't use matrices to represent abstract vectors without implicitly using some basis. When we write that a vector $\vec{v} = \begin{bmatrix}v_1\\\\v_2\end{bmatrix}$, what we are actually saying is that $\vec{v} = v_1\vec{e_1} + v_2\vec{e_2}$, where $\vec{e_i}$ are the elements of our implicit basis. We can abuse notation a little bit and write this as $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix}$. (All this can be made rigorous without too much more work, but I'm trying to keep this as simple as I can.)
  • Now suppose you want to represent $\vec{v}$ in another basis $\vec{b_i}$. $\vec{v}$ remains the same vector, so you want to find $w_i$ such that $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix} = \begin{bmatrix}\vec{b_1} & \vec{b_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$. If you know the $\vec{b_i}$ in terms of the $\vec{e_i}$, then you can expand them—if $\vec{b_i} = b_{i1}\vec{e_1} + b_{i2}\vec{e_2}$, then:
  • $$
  • \begin{align}
  • \vec{v} &= \begin{bmatrix}b_{11}\vec{e_1} + b_{12}\vec{e_2} & b_{21}\vec{e_1} + b_{22}\vec{e_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} \\\\
  • &= \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}
  • \end{align}
  • $$
  • That middle matrix is the change of basis matrix. Notice that its columns are the basis vectors $\vec{b_i}$ when expressed in the standard basis. When you use a matrix to transform a vector, you multiply the vector to the right of the matrix; but notice that when you use this change of basis matrix to transform a basis, the basis appears to the left of the matrix. Same fundamental operation—matrix multiplication—but two different ways to use it depending on what you're doing.
  • I suspect what's tripping you up here is the difference between applying a linear transformation to a vector versus applying a change of basis. Both involve matrix multiplication, but the nature of the transformation is different.
  • First, some words about vectors. There are two ways to introduce students to the concept of vectors. The first is that a vector is just a pair or triple or $n$-tuple of scalars. Operations like vector addition and scalar multiplication are defined in terms of those scalars. The second way is to think of vectors as [abstract things](https://en.wikipedia.org/wiki/Vector_space#Definition) that obey axioms involving vector addition and scalar multiplication. For finite-dimensional vector spaces, everything that's true in one presentation is true in the other. However, the first presentation assumes some standard basis, which acts as a way to assign coordinates to vectors. Much like a distance of ‘5’ only makes sense if you know that it's implicitly ‘5 meters’ or ‘5 lightyears’, a vector $\left\[\begin{smallmatrix}2\\\\7\end{smallmatrix}\right\]$ only makes sense if you know (or assume that we all agree on) what vector corresponds to ‘1’ of each of those coordinates. That set of vectors is the basis. In the second presentation, a basis is just an arbitrary collection of vectors $\vec{e_i}$ that span the space (meaning that any vector is some linear combination of the basis vectors).
  • You can't use matrices to represent abstract vectors without implicitly using some basis. When we write that a vector $\vec{v} = \left\[\begin{smallmatrix}v_1\\\\v_2\end{smallmatrix}\right\]$, what we are actually saying is that $\vec{v} = v_1\vec{e_1} + v_2\vec{e_2}$, where $\vec{e_i}$ are the elements of our implicit basis. We can abuse notation a little bit and write this as $\vec{v} = \left\[\begin{smallmatrix}\vec{e_1} & \vec{e_2}\end{smallmatrix}\right\]\left\[\begin{smallmatrix}v_1\\\\v_2\end{smallmatrix}\right\]$. (All this can be made rigorous without too much more work, but I'm trying to keep this as simple as I can.)
  • Now suppose you want to represent $\vec{v}$ in another basis $\vec{b_i}$. $\vec{v}$ remains the same vector, so you want to find $w_i$ such that $\vec{v} = \left\[\begin{smallmatrix}\vec{e_1} & \vec{e_2}\end{smallmatrix}\right\]\left\[\begin{smallmatrix}v_1\\\\v_2\end{smallmatrix}\right\] = \left\[\begin{smallmatrix}\vec{b_1} & \vec{b_2}\end{smallmatrix}\right\]\left\[\begin{smallmatrix}w_1\\\\w_2\end{smallmatrix}\right\]$. If you know the $\vec{b_i}$ in terms of the $\vec{e_i}$, then you can expand them—if $\vec{b_i} = b_{i1}\vec{e_1} + b_{i2}\vec{e_2}$, then:
  • $$
  • \begin{align}
  • \vec{v} &= \begin{bmatrix}b_{11}\vec{e_1} + b_{12}\vec{e_2} & b_{21}\vec{e_1} + b_{22}\vec{e_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} \\\\
  • &= \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}
  • \end{align}
  • $$
  • That middle matrix is the change of basis matrix. Notice that its columns are the basis vectors $\vec{b_i}$ when expressed in the standard basis. When you use a matrix to transform a vector, you multiply the vector to the right of the matrix; but notice that when you use this change of basis matrix to transform a basis, the basis appears to the left of the matrix. Same fundamental operation—matrix multiplication—but two different ways to use it depending on what you're doing.
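To see the final revision's claim concretely, here is a small worked example (the basis and vector below are chosen purely for illustration; they do not appear in the original answer). Take $\vec{b_1} = \vec{e_1} + \vec{e_2}$ and $\vec{b_2} = -\vec{e_1} + \vec{e_2}$, so the change of basis matrix is $\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix} = \begin{bmatrix}1 & -1\\\\1 & 1\end{bmatrix}$, whose columns are exactly $\vec{b_1}$ and $\vec{b_2}$ written in the standard basis. For $\vec{v} = \begin{bmatrix}2\\\\7\end{bmatrix}$, solving $\begin{bmatrix}1 & -1\\\\1 & 1\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} = \begin{bmatrix}2\\\\7\end{bmatrix}$ gives $w_1 = \tfrac{9}{2}$, $w_2 = \tfrac{5}{2}$, and indeed $\tfrac{9}{2}\vec{b_1} + \tfrac{5}{2}\vec{b_2} = \tfrac{9}{2}\begin{bmatrix}1\\\\1\end{bmatrix} + \tfrac{5}{2}\begin{bmatrix}-1\\\\1\end{bmatrix} = \begin{bmatrix}2\\\\7\end{bmatrix} = \vec{v}$, the same vector in either basis.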
#2: Post edited by r~~ · 2021-10-10T04:46:45Z (about 3 years ago)
  • I suspect what's tripping you up here is the difference between applying a linear transformation to a vector versus applying a change of basis. Both involve matrix multiplication, but the nature of the transformation is different.
  • You can't use matrices to represent abstract vectors without implicitly using some basis. When we write that a vector $\vec{v} = \begin{bmatrix}v_1\\\\v_2\end{bmatrix}$, what we are actually saying is that $\vec{v} = v_1\vec{e_1} + v_2\vec{e_2}$, where $\vec{e_i}$ are the elements of our implicit basis. We can abuse notation a little bit and write this as $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix}$. (All this can be made rigorous without too much more work, but I'm trying to keep this as simple as I can.)
  • Now suppose you want to represent $\vec{v}$ in another basis $\vec{b_i}$. $\vec{v}$ remains the same vector, so you want to find $w_i$ such that $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix} = \begin{bmatrix}\vec{b_1} & \vec{b_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$. If you know the $\vec{b_i}$ in terms of the $\vec{e_i}$, then you can expand them—if $\vec{b_i} = b_{i1}\vec{e_1} + b_{i2}\vec{e_2}$, then $\vec{v} = \begin{bmatrix}b_{11}\vec{e_1} + b_{12}\vec{e_2} & b_{21}\vec{e_1} + b_{22}\vec{e_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$
  • That middle matrix is the change of basis matrix. Notice that its columns are the basis vectors of $\vec{b_i}$ when expressed in the standard basis. When you use a matrix to transform a vector, you multiply the vector to the right of the matrix; but notice that when you use this change of basis matrix to transform a basis, the basis appears to the left of the matrix. Same fundamental operation—matrix multiplication—but two different ways to use it depending on what you're doing.
  • I suspect what's tripping you up here is the difference between applying a linear transformation to a vector versus applying a change of basis. Both involve matrix multiplication, but the nature of the transformation is different.
  • You can't use matrices to represent abstract vectors without implicitly using some basis. When we write that a vector $\vec{v} = \begin{bmatrix}v_1\\\\v_2\end{bmatrix}$, what we are actually saying is that $\vec{v} = v_1\vec{e_1} + v_2\vec{e_2}$, where $\vec{e_i}$ are the elements of our implicit basis. We can abuse notation a little bit and write this as $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix}$. (All this can be made rigorous without too much more work, but I'm trying to keep this as simple as I can.)
  • Now suppose you want to represent $\vec{v}$ in another basis $\vec{b_i}$. $\vec{v}$ remains the same vector, so you want to find $w_i$ such that $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix} = \begin{bmatrix}\vec{b_1} & \vec{b_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$. If you know the $\vec{b_i}$ in terms of the $\vec{e_i}$, then you can expand them—if $\vec{b_i} = b_{i1}\vec{e_1} + b_{i2}\vec{e_2}$, then:
  • $$
  • \begin{align}
  • \vec{v} &= \begin{bmatrix}b_{11}\vec{e_1} + b_{12}\vec{e_2} & b_{21}\vec{e_1} + b_{22}\vec{e_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} \\\\
  • &= \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}
  • \end{align}
  • $$
  • That middle matrix is the change of basis matrix. Notice that its columns are the basis vectors $\vec{b_i}$ when expressed in the standard basis. When you use a matrix to transform a vector, you multiply the vector to the right of the matrix; but notice that when you use this change of basis matrix to transform a basis, the basis appears to the left of the matrix. Same fundamental operation—matrix multiplication—but two different ways to use it depending on what you're doing.
#1: Initial revision by r~~ · 2021-10-09T19:16:12Z (about 3 years ago)
I suspect what's tripping you up here is the difference between applying a linear transformation to a vector versus applying a change of basis. Both involve matrix multiplication, but the nature of the transformation is different.

You can't use matrices to represent abstract vectors without implicitly using some basis. When we write that a vector $\vec{v} = \begin{bmatrix}v_1\\\\v_2\end{bmatrix}$, what we are actually saying is that $\vec{v} = v_1\vec{e_1} + v_2\vec{e_2}$, where $\vec{e_i}$ are the elements of our implicit basis. We can abuse notation a little bit and write this as $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix}$. (All this can be made rigorous without too much more work, but I'm trying to keep this as simple as I can.)

Now suppose you want to represent $\vec{v}$ in another basis $\vec{b_i}$. $\vec{v}$ remains the same vector, so you want to find $w_i$ such that $\vec{v} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}v_1\\\\v_2\end{bmatrix} = \begin{bmatrix}\vec{b_1} & \vec{b_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$. If you know the $\vec{b_i}$ in terms of the $\vec{e_i}$, then you can expand them—if $\vec{b_i} = b_{i1}\vec{e_1} + b_{i2}\vec{e_2}$, then $\vec{v} = \begin{bmatrix}b_{11}\vec{e_1} + b_{12}\vec{e_2} & b_{21}\vec{e_1} + b_{22}\vec{e_2}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix} = \begin{bmatrix}\vec{e_1} & \vec{e_2}\end{bmatrix}\begin{bmatrix}b_{11} & b_{21}\\\\b_{12} & b_{22}\end{bmatrix}\begin{bmatrix}w_1\\\\w_2\end{bmatrix}$

That middle matrix is the change of basis matrix. Notice that its columns are the basis vectors of $\vec{b_i}$ when expressed in the standard basis. When you use a matrix to transform a vector, you multiply the vector to the right of the matrix; but notice that when you use this change of basis matrix to transform a basis, the basis appears to the left of the matrix. Same fundamental operation—matrix multiplication—but two different ways to use it depending on what you're doing.
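
A minimal numerical sketch of the same relationship (the basis and vector here are arbitrary illustrative values, not from the original answer; NumPy is used only as a convenient checker):

```python
import numpy as np

# Standard-basis coordinates of the new basis vectors b_1 and b_2
# (any linearly independent pair would do).
b1 = np.array([1.0, 1.0])
b2 = np.array([-1.0, 1.0])

# Change of basis matrix: its columns are b_1 and b_2
# expressed in the standard basis, as described above.
B = np.column_stack([b1, b2])

# A vector given by its standard-basis coordinates.
v = np.array([2.0, 7.0])

# Coordinates w of the same vector in the basis {b_1, b_2}:
# solve B @ w = v, i.e. w = B^{-1} v.
w = np.linalg.solve(B, v)

# Reconstructing the vector from its new coordinates recovers v,
# confirming that v and w describe the same abstract vector.
assert np.allclose(w[0] * b1 + w[1] * b2, v)
print(w)  # [4.5 2.5]
```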