This is largely just a notational difference insofar as it's used in the example you give. While it is often convenient and enlightening to momentarily view a matrix as a "vector" of vectors, taking this too seriously can be problematic. A vector space expects a [field](https://en.wikipedia.org/wiki/Field_(mathematics)) of scalars, and vectors don't form a field. Even generalizations like [(semi)modules](https://en.wikipedia.org/wiki/Module_(mathematics)) still expect a [(semi)ring](https://en.wikipedia.org/wiki/Ring_(mathematics)) of scalars, which vectors are also not. The upshot is that a "theory of vectrixes" could not rely on (and presumably would not support) many of the properties and theorems you'd expect from linear algebra. In other words, the analogy to traditional linear algebra would break down.
In terms of numeric computing, a vector is usually represented as a dense array of numbers. A matrix is most commonly laid out as a dense array of numbers as well, in row-major order, i.e. all the entries of the first row followed by all the entries of the second row and so forth. Thus a matrix and a vectrix would be represented identically in computer memory. Similarly, the vectrix operations correspond to standard linear algebra operations. All told, as far as computing with the usual data representations is concerned, whether you consider a matrix as a 2D array of scalars or as a vector of (row) vectors is just a matter of interpretation.
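To make the layout claim concrete, here's a quick sketch in plain Python (the 2×3 example and the helper names are mine, not from any particular library): a flat row-major array and a list of row vectors hold the same numbers in the same order, and the usual index arithmetic converts between the two views.

```python
# A 2x3 "matrix" stored two ways: as a flat array of scalars in
# row-major order, and as a list of row vectors.  Both put the same
# numbers in the same order; only the indexing convention differs.
N_ROWS, N_COLS = 2, 3
flat = [1.0, 2.0, 3.0,              # row 0
        4.0, 5.0, 6.0]              # row 1
rows = [[1.0, 2.0, 3.0],
        [4.0, 5.0, 6.0]]            # the "vector of vectors" view

def entry_flat(i, j):
    # Standard row-major index arithmetic.
    return flat[i * N_COLS + j]

def entry_rows(i, j):
    return rows[i][j]

# Every (i, j) entry agrees between the two representations.
assert all(entry_flat(i, j) == entry_rows(i, j)
           for i in range(N_ROWS) for j in range(N_COLS))
```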
On both theoretical and practical fronts I'd recommend looking into geometric algebra. On the theoretical side, geometric algebra is a more standard and well-behaved framework, and it is more general and far more conceptually clear than vectrixes. For example, the $w^\times$ operation, which seems motivated solely to produce something that will lead to the cross product, corresponds to a duality in geometric algebra, $wI$, which is just multiplication by the unit pseudoscalar. The general GA identity $u \cdot (vI) = (u \wedge v)I$ explains what's going on when combined with the 3D-specific identification $u\times v = (u \wedge v)I^{-1} = -(u \wedge v)I$ (taking $I = e_1 e_2 e_3$): together these give $v \cdot (wI) = w \times v = w^\times v$. The wedge product is, however, far better behaved, more general, and more geometrically sensible than the cross product.
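If you want to check identities like this without trusting anyone's algebra, a geometric product over basis blades is small enough to hand-roll. The following is a minimal sketch (my own construction, not a real GA library; Euclidean signature, $I = e_1 e_2 e_3$) that verifies $u \cdot (vI) = (u \wedge v)I$ numerically, and compares both sides against a coordinate cross product; with this pseudoscalar convention both come out as $v \times u$ (duality sign conventions vary between texts).

```python
# Minimal Euclidean G(3): multivectors are dicts from basis blades
# (sorted index tuples like (1,3)) to float coefficients.

def blade_mul(a, b):
    """Multiply basis blades; return (sign, resulting blade)."""
    idx, sign = list(a) + list(b), 1
    swapped = True
    while swapped:                      # bubble-sort the generators;
        swapped = False                 # each swap of distinct e_i, e_j
        for k in range(len(idx) - 1):   # flips the sign (they anticommute)
            if idx[k] > idx[k + 1]:
                idx[k], idx[k + 1] = idx[k + 1], idx[k]
                sign, swapped = -sign, True
    out, k = [], 0
    while k < len(idx):                 # cancel e_i e_i = +1 (Euclidean)
        if k + 1 < len(idx) and idx[k] == idx[k + 1]:
            k += 2
        else:
            out.append(idx[k])
            k += 1
    return sign, tuple(out)

def gp(x, y):
    """Geometric product of two multivectors."""
    out = {}
    for ba, ca in x.items():
        for bb, cb in y.items():
            s, blade = blade_mul(ba, bb)
            out[blade] = out.get(blade, 0.0) + s * ca * cb
    return out

def grade(x, r):
    """Grade-r part of a multivector."""
    return {b: c for b, c in x.items() if len(b) == r}

def vec(c):
    return {(1,): c[0], (2,): c[1], (3,): c[2]}

I = {(1, 2, 3): 1.0}                    # unit pseudoscalar e1 e2 e3

uc, vc = [2.0, -1.0, 0.5], [1.0, 3.0, -2.0]
u, v = vec(uc), vec(vc)

lhs = grade(gp(u, gp(v, I)), 1)         # u . (vI): vI is a bivector, so
                                        # the dot is the grade-1 part
rhs = gp(grade(gp(u, v), 2), I)         # (u ^ v) I, with u ^ v = <uv>_2

# v x u by the usual coordinate formula, for comparison.
v_cross_u = [vc[1]*uc[2] - vc[2]*uc[1],
             vc[2]*uc[0] - vc[0]*uc[2],
             vc[0]*uc[1] - vc[1]*uc[0]]

for i, b in enumerate([(1,), (2,), (3,)]):
    assert abs(lhs.get(b, 0.0) - rhs.get(b, 0.0)) < 1e-12
    assert abs(lhs.get(b, 0.0) - v_cross_u[i]) < 1e-12
```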
Computationally, if you only care about rotations between frames, then you can represent these as rotors, which correspond to the even subalgebra of the geometric algebra. For 3D, this *exactly* corresponds to quaternions. Representing rotations with quaternions has many benefits over matrix representations. Composing and applying quaternions takes fewer operations. Quaternions have less redundancy, which means less memory per rotation and fewer ways for things to go wrong, which in turn leads to simpler code. For example, a $3\times 3$ matrix or a triple of vectors intended to represent a frame can fail to be orthogonal; a quaternion cannot. There are also some off-the-shelf implementations of geometric algebra and *tons* of off-the-shelf implementations of quaternions. As a note, even if you are only interested in quaternions, I'd still recommend understanding them from the perspective of geometric algebra, which makes what they are far clearer. It also embeds them in a larger context, which is often helpful; e.g. quaternions can't represent reflections, but reflections have a quite nice representation in GA that relates very nicely to quaternions.
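As a concrete illustration of how little machinery quaternions need, here's a minimal sketch in plain Python (the helper names are mine): composing two rotations is one Hamilton product, applying one is a conjugation, and the only "repair" ever needed against floating-point drift is a single normalization, with no orthogonality to re-establish.

```python
import math

# Unit quaternions as (w, x, y, z) tuples.  A rotation by angle `theta`
# about a unit axis is cos(theta/2) + sin(theta/2) * axis.

def from_axis_angle(axis, theta):
    s = math.sin(theta / 2)
    return (math.cos(theta / 2), s * axis[0], s * axis[1], s * axis[2])

def qmul(a, b):
    # Hamilton product; composes rotations (b applied first, then a).
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def rotate(q, v):
    # Conjugation q v q^-1 with v as a pure quaternion (0, v);
    # assumes q is unit, so the inverse is the conjugate.
    w, x, y, z = q
    p = qmul(qmul(q, (0.0, *v)), (w, -x, -y, -z))
    return p[1:]

def normalize(q):
    # One division repairs any drift; compare re-orthogonalizing a matrix.
    n = math.sqrt(sum(c * c for c in q))
    return tuple(c / n for c in q)

# Two quarter turns about z compose to a half turn: e1 -> -e1.
qz = from_axis_angle((0.0, 0.0, 1.0), math.pi / 2)
half_turn = normalize(qmul(qz, qz))
vx = rotate(half_turn, (1.0, 0.0, 0.0))
assert all(abs(a - b) < 1e-12 for a, b in zip(vx, (-1.0, 0.0, 0.0)))
```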