Question
Maximize Independent Variable of Matrix Multiplication
Let $T$ be an $m\times n$ matrix with column vectors $\vec{T_i}$:
$$ T=\begin{bmatrix} \vec{T_1} & \cdots & \vec{T_n} \end{bmatrix} $$
Let $\vec{x}$ be an unknown $n$-element vector:
$$ \vec{x}=\begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} $$
Suppose the following equation holds for a known $m$-element vector $\vec{y}$:
$$ \frac{T\vec{x}}{\left|\left|T\vec{x}\right|\right|}= \frac{\vec{y}}{\left|\left|\vec{y}\right|\right|} $$
That is, $T\vec{x}$ and $\vec{y}$ have the same direction.
If each component $x_i$ of $\vec{x}$ must satisfy the condition $0\le x_i\le1$, how does one maximize the magnitude of $\vec{x}$?
Answer
Since all the $x_i$ are restricted to be non-negative, $\sum_i x_i$ is exactly the $\ell^1$ magnitude $\left|\left|\vec x\right|\right|_1$ of the vector, so maximizing that sum maximizes the magnitude, and the sum is a linear function. (One caveat if "magnitude" means the Euclidean norm: for arbitrary non-negative vectors a larger sum does not imply a larger Euclidean norm; for example, $(0.7, 0.7)$ has the larger sum but $(1, 0)$ has the larger Euclidean norm. The two objectives do agree whenever the feasible $\vec x$ all lie along a single ray, which is the case when $T$ has a trivial kernel.)
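Putting the pieces together, the problem to solve is the following (this is just the question's setup restated in optimization form; nothing new is assumed):
$$ \max_{\vec x}\ \sum_{i=1}^{n} x_i \quad\text{subject to}\quad \frac{T\vec{x}}{\left|\left|T\vec{x}\right|\right|}= \frac{\vec{y}}{\left|\left|\vec{y}\right|\right|},\qquad 0\le x_i\le 1. $$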
Your direction constraint can be expressed as: $$\vec y \wedge T(\vec x) = 0\quad\text{and}\quad\vec y \cdot T(\vec x) \geq 0$$ where $\wedge$ denotes the outer (exterior, wedge) product. Both of these are linear (in)equalities in $\vec x$, as are your bounds on the $x_i$, so the whole problem is a linear programming problem. There are very well-developed tools and theories for solving these; a sketch follows.
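To make the linear program concrete, here is a minimal sketch using SciPy's `linprog` (an assumption about tooling; any LP solver would do, and the `T` and `y` below are made-up example data). Instead of the wedge/dot form, the same constraint is encoded as $T\vec x = t\,\hat y$ with an auxiliary scalar $t \ge 0$ and $\hat y = \vec y / \left|\left|\vec y\right|\right|$, which likewise says that $T\vec x$ lies on the ray through $\vec y$:

```python
import numpy as np
from scipy.optimize import linprog

# Made-up example data: T is m x n, y is a known m-vector.
T = np.array([[1.0, 2.0, 0.5],
              [0.0, 1.0, 1.0]])
y = np.array([2.0, 1.0])
m, n = T.shape
y_hat = y / np.linalg.norm(y)

# Decision variables: z = (x_1, ..., x_n, t).
# Objective: maximize sum(x)  <=>  minimize -sum(x); t has zero cost.
c = np.concatenate([-np.ones(n), [0.0]])

# Direction constraint as equalities: T x - t * y_hat = 0.
A_eq = np.hstack([T, -y_hat.reshape(-1, 1)])
b_eq = np.zeros(m)

# Bounds: 0 <= x_i <= 1, and t >= 0 keeps T x on y's side of the line.
bounds = [(0.0, 1.0)] * n + [(0.0, None)]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
if res.success:
    x = res.x[:n]
    print("x =", x, "sum =", x.sum(), "T x =", T @ x)
```

One degenerate case to note: $t = 0$ (so $T\vec x = 0$) is feasible in this encoding even though $T\vec x/\left|\left|T\vec x\right|\right|$ is then undefined; if that case must be excluded, impose a small positive lower bound on $t$.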
If you have similar problems, you may also want to look at other convex optimization forms such as convex cases of quadratic programming or second-order cone programming (SOCP). For instance, minimizing the magnitude under the same linear constraints is a convex quadratic program and an SOCP; note that maximizing a Euclidean norm over a polytope is non-convex, which is one reason reducing the objective to a linear sum is valuable here. In general, convex optimization is much easier and reasonably efficient compared to general optimization. Unlike in many areas, the dividing line between easy/efficient and hard/slow in optimization runs between convex and non-convex problems rather than between linear and non-linear ones. (That said, linear is still nice.)
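For concreteness, here is the epigraph form that magnitude minimization would take as an SOCP (a sketch in the question's notation; the scalars $s$ and $t$ are auxiliary variables introduced here):
$$ \begin{aligned} \min_{\vec x,\, t,\, s}\quad & s\\ \text{subject to}\quad & \left|\left|\vec x\right|\right| \le s,\\ & T\vec x = t\,\frac{\vec y}{\left|\left|\vec y\right|\right|},\quad t \ge 0,\\ & 0 \le x_i \le 1. \end{aligned} $$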