Since all the $x_i$ are restricted to be non-negative, the magnitude of the vector increases if and only if the sum of the $x_i$ increases, so maximizing $\sum_i x_i$ is equivalent to maximizing $|x|$. Crucially, the sum, unlike the magnitude, is a linear function of the $x_i$.
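To see the equivalence concretely (a one-line check, under the assumption that the feasible vectors all lie on a common ray $\vec x = t\,\vec u$ with scale $t \geq 0$ and a fixed non-negative direction $\vec u$): $$\sum_i x_i = t \sum_i u_i \qquad\text{and}\qquad |\vec x| = t\,|\vec u|,$$ so both are increasing in the single parameter $t$ and are maximized together. (Without some such direction restriction the equivalence can fail: $(0.7, 0.7)$ has a larger coordinate sum than $(1, 0)$ but a smaller magnitude.)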
Your direction constraint can be expressed as: $$\vec y \wedge T(\vec x) = 0\quad\text{and}\quad\vec y \cdot T(\vec x) \geq 0$$ Here $\wedge$ is the [outer (or exterior, or wedge) product](https://en.wikipedia.org/wiki/Exterior_algebra). Both are linear (in)equalities in $\vec x$, as are your constraints on the $x_i$, so the whole problem is a [linear programming](https://en.wikipedia.org/wiki/Linear_programming) problem, for which there are very well-developed tools and theories.
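To make the formulation concrete, here is a minimal sketch with `scipy.optimize.linprog`, assuming a 3-dimensional target (where $\vec y \wedge \vec v = 0$ is equivalent to the cross product $\vec y \times \vec v = 0$), a hypothetical matrix `T` and direction `y`, and assumed box bounds $0 \leq x_i \leq 1$ to keep the maximization bounded:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical example data: T maps R^4 -> R^3, y is the target direction.
T = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.3, 0.1],
              [0.4, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 1.0])

n = T.shape[1]

def skew(v):
    # Skew-symmetric matrix so that skew(v) @ w == np.cross(v, w).
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# y ^ T(x) = 0 in R^3 is the same as cross(y, T @ x) = 0:
# three linear equality constraints on x.
A_eq = skew(y) @ T
b_eq = np.zeros(3)

# y . T(x) >= 0 rewritten in linprog's "A_ub @ x <= b_ub" form.
A_ub = -(y @ T).reshape(1, -1)
b_ub = np.zeros(1)

# linprog minimizes, so negate the objective to maximize sum(x).
# Assumed bounds 0 <= x_i <= 1 keep the LP bounded.
res = linprog(c=-np.ones(n), A_ub=A_ub, b_ub=b_ub,
              A_eq=A_eq, b_eq=b_eq, bounds=[(0, 1)] * n,
              method="highs")
print("x* =", res.x, " sum =", res.x.sum())
```

Note that the three cross-product rows are linearly dependent (the skew matrix has rank 2); the solver's presolve copes with the redundancy.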
If you have similar problems, you may also want to look at other convex optimization forms, such as convex cases of [quadratic programming](https://en.wikipedia.org/wiki/Quadratic_programming) or [second-order cone programming](https://en.wikipedia.org/wiki/Second-order_cone_programming) (SOCP). If we hadn't been able to reduce the magnitude to a sum, the problem would still have fallen into the convex case of quadratic programming, and into SOCP. In general, convex optimization is far more tractable, and reasonably efficient, compared to general optimization. Unlike many areas, the dividing line between easy/efficient and hard/slow in optimization lies between convex and non-convex cases rather than between linear and non-linear ones. (That said, linear is still nice.)
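For a taste of how these neighboring classes are modeled, here is a hedged `cvxpy` sketch on a convex stand-in objective (a hypothetical variant that minimizes the distance $\lVert T\vec x - \vec y\rVert$ rather than maximizing a magnitude, since maximizing a Euclidean norm is itself non-convex): with the plain 2-norm it is an SOCP, and with the squared norm a convex QP.

```python
import numpy as np
import cvxpy as cp

# Same hypothetical T and y as in the LP sketch above (assumed data).
T = np.array([[1.0, 0.5, 0.0, 0.2],
              [0.0, 1.0, 0.3, 0.1],
              [0.4, 0.0, 1.0, 0.0]])
y = np.array([1.0, 2.0, 1.0])

x = cp.Variable(T.shape[1], nonneg=True)  # x_i >= 0 built into the variable

# SOCP form: minimize the Euclidean norm ||T x - y||.
# Swapping cp.norm(...) for cp.sum_squares(T @ x - y) gives the convex-QP form.
prob = cp.Problem(cp.Minimize(cp.norm(T @ x - y, 2)))
prob.solve()

print("x* =", x.value, " objective =", prob.value)
```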