Maximize Independent Variable of Matrix Multiplication

Let $T$ be an $m\times n$ matrix with column vectors $\vec{T_i}$:

$$ T=\begin{bmatrix} \vec{T_1} & \cdots & \vec{T_n} \end{bmatrix} $$

Let $\vec{x}$ be an unknown $n$-element vector:

$$ \vec{x}=\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} $$

Suppose the following equation holds for a known $m$-element vector $\vec{y}$:

$$ \frac{T\vec{x}}{\left|\left|T\vec{x}\right|\right|}= \frac{\vec{y}}{\left|\left|\vec{y}\right|\right|} $$

That is, $T\vec{x}$ and $\vec{y}$ have the same direction.

If each component $x_i$ of $\vec{x}$ must satisfy the condition $0\le x_i\le1$, how does one maximize the magnitude of $\vec{x}$?

Answer

Since all the $x_i$ are restricted to be non-negative, the magnitude of the vector increases if and only if the sum of the $x_i$ increases. Therefore, maximizing $\sum_i x_i$ is equivalent to maximizing $\|\vec x\|$. This sum is a linear function.

Your direction constraint can be expressed as $$\vec y \wedge T(\vec x) = 0\quad\text{and}\quad\vec y \cdot T(\vec x) \geq 0,$$ where $\wedge$ denotes the outer (exterior, or wedge) product. These are both linear (in)equalities, as are your constraints on the $x_i$, so the whole problem is a linear programming problem. There are very well-developed tools and theories for solving these.
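
For concreteness, here is a minimal sketch of that linear program, assuming NumPy and SciPy's `linprog` (the helper name `maximize_x` and the example `T` and `y` are made up for illustration). The wedge constraint is expanded into its pairwise components $y_j(T\vec x)_k - y_k(T\vec x)_j = 0$ for $j<k$; the comment threads below discuss why this expansion works.

```python
import numpy as np
from scipy.optimize import linprog

def maximize_x(T, y):
    """Maximize sum(x) subject to T x parallel to y (same direction)
    and 0 <= x_i <= 1, as a linear program."""
    m, n = T.shape
    # Equality rows: y_j (T x)_k - y_k (T x)_j = 0 for all j < k,
    # i.e. the pairwise components of the wedge product y ^ (T x).
    A_eq = np.array([y[j] * T[k] - y[k] * T[j]
                     for j in range(m) for k in range(j + 1, m)])
    b_eq = np.zeros(A_eq.shape[0])
    # One inequality fixes the sign: y . (T x) >= 0, written -(y @ T) x <= 0.
    A_ub = -(y @ T).reshape(1, n)
    b_ub = np.zeros(1)
    # linprog minimizes, so maximize sum(x) via the objective -1 . x.
    res = linprog(-np.ones(n), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * n)
    return res.x

# Hypothetical example: T @ [0.5, 1] = [0.5, 1, 1.5], which is parallel to y.
T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(maximize_x(T, y))  # ~[0.5, 1.0]
```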

If you have similar problems, you may also want to look at other convex optimization forms, such as convex cases of quadratic programming or second-order cone programming (SOCP). If we hadn't been able to reduce the magnitude to a sum, the problem would still fall into the convex case of quadratic programming and into SOCP. In general, convex optimization is much easier and reasonably efficient compared to general optimization. Unlike in many areas, the dividing line between easy/efficient and hard/slow in optimization runs between convex and non-convex cases rather than between linear and non-linear ones. (That said, linear is still nice.)
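
As a sketch of what those tools look like in practice, the same problem can be handed to a general convex-optimization modeling layer; this assumes the CVXPY library and reuses the toy data and the pairwise expansion of the wedge constraint from the previous snippet.

```python
import cvxpy as cp
import numpy as np

T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])   # same toy data as above
y = np.array([1.0, 2.0, 3.0])
m, n = T.shape
# Pairwise components of y ^ (T x), as in the LP sketch above.
M = np.array([y[j] * T[k] - y[k] * T[j]
              for j in range(m) for k in range(j + 1, m)])

x = cp.Variable(n)
prob = cp.Problem(cp.Maximize(cp.sum(x)),       # linear objective
                  [M @ x == 0,                  # T x and y are collinear
                   (y @ T) @ x >= 0,            # ... and point the same way
                   x >= 0, x <= 1])             # box constraints
prob.solve()
print(x.value)  # ~[0.5, 1.0]
```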


Linearity of Directional Constraint
Josh Hyatt wrote over 2 years ago

How is the constraint $T\vec{x}\wedge\vec{y}=0$ converted into a set of linear constraints? For $m=2$, one can use the general form of the area $A$ of a parallelogram defined by two vectors $\vec{u}$ and $\vec{v}$: $A=\sqrt{|\vec{u}|^2|\vec{v}|^2-(\vec{u}\cdot\vec{v})^2}$. Substituting $A=0$, this quickly simplifies to $0=u_xv_y-u_yv_x$, which is indeed easily representable in our LP problem. However, it's unclear to me how this extends to higher dimensions. What is a generalized procedure for finding a minimal set of linear constraints equivalent to $T\vec{x}\wedge\vec{y}=0$?

Derek Elkins wrote over 2 years ago · edited over 2 years ago

The same way you'd convert any linear function between finite-dimensional spaces to a matrix. Choose a basis for the input, i.e. for $\vec x$, and a basis for the output, which will be a basis of bivectors; then evaluate the linear function on each input basis vector and expand the result in the output basis. This gives you a matrix in a completely mechanical manner; no non-linear operation is involved. The constraints are then $M\vec x=0$. It's tedious to typeset the components of $M$, but computing them is a completely mechanical and boring task. If $T:V\to W$, then the expression can be viewed as a linear function $V\to W\wedge W$. If $\mathbf e_i$ is the basis for $V$ and $\mathbf e'_j$ the basis for $W$, then a basis for $W\wedge W$ is $\mathbf e'_j\wedge\mathbf e'_k$ for $j<k$. Then simply distribute $T(\mathbf e_i)\wedge\left(\sum_j y_j \mathbf e'_j\right)$, regrouping into the basis using the antisymmetry of $\wedge$. The coefficients will be the components of the $i$th column of the matrix.
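
For anyone who wants that procedure spelled out, here is a short sketch (assuming NumPy; the function name `wedge_constraint_matrix` is made up): evaluate $\vec x\mapsto T(\vec x)\wedge\vec y$ on each input basis vector $\mathbf e_i$ and read off the coefficients in the bivector basis $\mathbf e'_j\wedge\mathbf e'_k$, $j<k$.

```python
import numpy as np

def wedge_constraint_matrix(T, y):
    """Build M such that M x = 0 encodes (T x) ^ y = 0,
    one row per bivector basis element e'_j ^ e'_k with j < k."""
    m, n = T.shape
    pairs = [(j, k) for j in range(m) for k in range(j + 1, m)]
    M = np.empty((len(pairs), n))
    for i in range(n):
        Tei = T[:, i]  # T(e_i), the i-th column of T
        for r, (j, k) in enumerate(pairs):
            # Coefficient of e'_j ^ e'_k in T(e_i) ^ y, after regrouping
            # with the antisymmetry of the wedge product.
            M[r, i] = Tei[j] * y[k] - Tei[k] * y[j]
    return M

T = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
print(wedge_constraint_matrix(T, y))  # rows are redundant but all linear
```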