This answer is based on self-reflection, not research literature, so take it for what it is worth.
Generally speaking, to understand an expression, I would prefer most of the mathematical objects in it to be "simple": positive rather than negative, integer if possible, rational rather than a root expression (though $\pi$ and $e$ are pretty nice).
Furthermore, generally speaking, I would prefer any "non-simple" entities to appear in contexts that are as "simple" as possible.
What "simple" amounts to is that it requires less cognitive processing to figure out how it behaves and how big it is and so on.
Now, $\sqrt{3}$ is about $1.7$. If I see the expression $1/\sqrt{3}$, then to figure out an approximate value for it, I have to divide by a non-integer, and that is hard. On the other hand, if I see $\sqrt{3}/3$, it amounts to the much easier computation $1.7/3$.
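Spelling out the estimate with that rough value of $1.7$:

$$\frac{1}{\sqrt{3}} \approx \frac{1}{1.7} \quad \text{(divide by a non-integer)}, \qquad \frac{\sqrt{3}}{3} \approx \frac{1.7}{3} \approx 0.57 \quad \text{(divide by an integer)}.$$

Both, of course, name the same number; only the second form hands you an easy mental division.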
I think the same applies, more or less, to other kinds of cognitive processing besides estimating the size of an expression. It is easier to work when the complicated pieces sit in simple places, and much harder when they sit in complicated places.