This answer combines the thoughts in both of the other (current) answers (tommi's and r~~'s) but presents them in a more formal context.
A significant portion of high school (and earlier... and a decent amount of later...) math is highly algorithmic, and, in particular, corresponds to the important idea of finding a [*normal form*](https://en.wikipedia.org/wiki/Normal_form_(abstract_rewriting)). Given a collection of [rewrite rules](https://en.wikipedia.org/wiki/Abstract_rewriting_system), often produced by orienting equations, a **normal form** is a term to which none of the rewrite rules applies.
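To make this concrete, here is a minimal sketch of such a rewrite system in Haskell (all of the names here, `Expr`, `step`, and so on, are made up for illustration): a few familiar arithmetic equations, oriented left-to-right as rules.

```haskell
-- A tiny expression language: variables, integer literals, sums, products.
data Expr = Var String | Lit Integer | Add Expr Expr | Mul Expr Expr
  deriving (Eq, Ord, Show)

-- Familiar equations oriented left-to-right as rewrite rules.  `step`
-- returns Just the rewritten term if some rule applies at the root,
-- or Nothing if no rule applies there.
step :: Expr -> Maybe Expr
step (Add (Lit 0) e)       = Just e              -- 0 + e  ~>  e
step (Add e (Lit 0))       = Just e              -- e + 0  ~>  e
step (Mul (Lit 1) e)       = Just e              -- 1 * e  ~>  e
step (Mul e (Lit 1))       = Just e              -- e * 1  ~>  e
step (Mul (Lit 0) _)       = Just (Lit 0)        -- 0 * e  ~>  0
step (Mul _ (Lit 0))       = Just (Lit 0)        -- e * 0  ~>  0
step (Add (Lit m) (Lit n)) = Just (Lit (m + n))  -- constant folding
step (Mul (Lit m) (Lit n)) = Just (Lit (m * n))
step _                     = Nothing
```

A term like `Var "x"` or `Add (Var "x") (Var "y")` is a normal form for these rules: no rule applies to it, at the root or anywhere else.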
There are various properties we desire of a normal form with respect to a given equivalence relation, call it $\sim$, on syntactic terms. In most applications, this will be a semantic equivalence. Write $N(t)$ for the normal form of the term $t$. Then we want $t \sim t' \implies N(t)=N(t')$, $t \sim N(t)$, and $N(N(t)) = N(t)$. This definition is in terms of an abstract normalization procedure. If we instead have a term rewriting system, which I'll write as $t \leadsto t'$ for "$t$ rewrites to $t'$", then the following properties will induce such a procedure. We need [confluence](https://en.wikipedia.org/wiki/Confluence_(abstract_rewriting)) and [termination](https://en.wikipedia.org/wiki/Termination_(term_rewriting)) to even get a function $N$. Confluence ensures a unique normal form and termination ensures that a normal form exists. Next, $t \leadsto t' \implies t \sim t'$ ensures that equivalence is preserved by rewrites; this gives $t \sim N(t)$ by transitivity along the rewrite sequence, and $N(N(t)) = N(t)$ is automatic since $N(t)$ is already normal. With this, we only need to show $t \sim t' \implies N(t) = N(t')$.
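Continuing the sketch above (again just an illustration; that this particular rule set is confluent and terminating is something one would have to check), a normalization procedure simply applies the rules until none fires, here innermost-first:

```haskell
-- Normalize subterms first (an innermost strategy), then apply root
-- rules until none fires.  Termination of the rule set guarantees the
-- loop ends; confluence guarantees the result doesn't depend on the
-- order in which rules were applied.
normalize :: Expr -> Expr
normalize (Add a b) = rootReduce (Add (normalize a) (normalize b))
normalize (Mul a b) = rootReduce (Mul (normalize a) (normalize b))
normalize e         = rootReduce e

rootReduce :: Expr -> Expr
rootReduce e = maybe e rootReduce (step e)
```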
One of the main benefits of normal forms is indeed, as r~~ suggests, that they simplify comparing answers. In particular, since we often perform term rewriting on syntactic terms where the notion of equality is simply that they are the same syntactic expression, i.e. that they "look the same", normal forms reduce a potentially difficult equivalence relation to a trivial one. This captures the reduction in "cognitive processing" that tommi references. More generally, the existence of normal forms allows us to identify the elements of the semantic domain with the normal forms. That is, it gives us a tight syntactic handle on elements of the semantic domain. We can then learn properties of the semantic domain by studying the properties of the usually much more tractable normal forms.
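In terms of the sketch, the hard question "do these two expressions denote the same thing?" becomes the trivial one "are their normal forms the same expression?", i.e. the derived `(==)` on `Expr`:

```haskell
-- Semantic equivalence (for this little fragment) reduces to syntactic
-- equality of normal forms.
equiv :: Expr -> Expr -> Bool
equiv s t = normalize s == normalize t

-- e.g. equiv (Mul (Lit 1) (Var "x")) (Add (Var "x") (Lit 0)) == True:
-- both sides normalize to Var "x".
```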
In practice, we'll often loosen this a bit by requiring only that $t \sim t' \implies N(t) \approx N(t')$ where $\approx$ is a refinement of $\sim$, i.e. $t \approx t' \implies t \sim t'$ but not necessarily the other way. For example, we often don't care about ordering, e.g. of the monomials of a polynomial or of the atomic terms in disjunctive/conjunctive normal form. It will usually be the case that checking $\approx$ is still significantly easier than checking $\sim$. Several *different* aspects of automated theorem provers involve computing normal forms.
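To illustrate this loosening with the sketch above: rather than adding commutativity as a (non-terminating) rewrite rule, we can compare sums up to reordering by sorting their summands, which checks a coarser $\approx$ that is still cheap. This only handles reordering at the top level of a sum; a real implementation would do more.

```haskell
import Data.List (sort)

-- Flatten a nested sum into its list of summands.
summands :: Expr -> [Expr]
summands (Add a b) = summands a ++ summands b
summands e         = [e]

-- Normal forms compared up to the order of summands, using the
-- derived Ord instance on Expr.
equivUpToOrder :: Expr -> Expr -> Bool
equivUpToOrder s t =
  sort (summands (normalize s)) == sort (summands (normalize t))
```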
The notion of normal form is powerful and interesting theoretically as well as practically. [Word problems in group theory](https://en.wikipedia.org/wiki/Word_problem_for_groups) (and elsewhere), [cut elimination](https://en.wikipedia.org/wiki/Cut-elimination_theorem) in logic, and [normalization in type theory and lambda calculus](https://en.wikipedia.org/wiki/Normalization_property_(abstract_rewriting)) are theoretically important examples.
While, in my experience, the fact that these normal forms exist, e.g. for polynomial expressions, let alone that they can be arrived at mechanically, is not usually pointed out in early (or later...) education, many "simplifications" can indeed be understood as moving towards a normal form, and often "simplified forms" are normal forms in a fairly natural sense, e.g. reduced forms for rationals. Many common representations are also implicitly motivated/justified by the existence of normal forms. In particular, representing a (finite dimensional) vector as a tuple of coefficients with respect to a basis follows in part from the fact that every vector expression can be reduced to a linear combination. Similarly, rationalizing the denominator is justified in part by the fact that elements of the field extension $\mathbb Q(\sqrt 3)$, say, can be represented as vectors with respect to the basis $1$ and $\sqrt 3$. Rationalizing the denominator is a rewrite that moves you closer to the normal form $a + b\sqrt 3$.
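As a sketch of that last point (the type and names here, `QSqrt3` and so on, are again made up for illustration): once you know every element of $\mathbb Q(\sqrt 3)$ has the normal form $a + b\sqrt 3$, you can represent elements directly as pairs $(a, b)$, and division is then exactly "rationalize the denominator":

```haskell
-- An element a + b*sqrt(3) of Q(sqrt 3), stored in normal form as the
-- pair of rational coefficients (a, b).
data QSqrt3 = QSqrt3 { ratPart :: Rational, sqrt3Part :: Rational }
  deriving (Eq, Show)

mul :: QSqrt3 -> QSqrt3 -> QSqrt3
mul (QSqrt3 a b) (QSqrt3 c d) =
  QSqrt3 (a*c + 3*b*d) (a*d + b*c)   -- using (sqrt 3)^2 = 3

-- Division multiplies by the conjugate c - d*sqrt(3) and divides by the
-- rational norm c^2 - 3*d^2, i.e. it rationalizes the denominator.  The
-- norm is nonzero for (c, d) /= (0, 0) since sqrt 3 is irrational.
divide :: QSqrt3 -> QSqrt3 -> QSqrt3
divide (QSqrt3 a b) (QSqrt3 c d) =
  let n = c*c - 3*d*d
  in QSqrt3 ((a*c - 3*b*d) / n) ((b*c - a*d) / n)
```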
Of course, there can be different notions of normal form for the same type of expression, and a non-normal form can more clearly present the structure of an expression; mathematicians will often prefer such structure-revealing expressions to their normal forms. By construction, non-normal-form expressions are semantically equivalent to their normal forms, so each is as good as the other as far as semantics is concerned.