Q&A

Why always rationalize a denominator?

+6
−0

Schoolteachers will insist that their students present answers to problems with rational (indeed integral) denominators: never $1/\sqrt3$, for example, but instead $\sqrt3/3$. That's also how math textbooks present answers. I understand why it's important to learn how to rationalize a denominator, and why we sometimes want rational denominators. But why should students put every single answer in those terms? What's wrong with presenting a number as $1/\sqrt3$?



+5
−0

Mathematically, absolutely nothing is wrong with fractions that don't have the smallest positive integer denominator that they could have, as far as I know.

Pedagogically, the policy does help with there being a unique answer that can be checked quickly, when grading multiple homework assignments or tests. I imagine that property can also be useful at times outside of the classroom, though of course any working mathematician would be comfortable with the idea of there being multiple equivalent representations of a mathematical object; I don't think anyone outside of an educational context is likely to bat an eye at $1/\sqrt{3}$ in practice.
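As a small illustration of "a unique answer that can be checked quickly": Python's standard-library `Fraction` type keeps every rational in a normal form (lowest terms, positive denominator), so answers that look different become literally identical objects and a grader can compare them with plain equality. (A sketch using only the standard library; the choice of example is mine, not the answer's.)

```python
from fractions import Fraction

# Fraction normalizes on construction: lowest terms, positive denominator.
# Distinct-looking answers collapse to the same canonical representation.
assert Fraction(2, 4) == Fraction(1, 2)
assert str(Fraction(-3, -6)) == "1/2"

# The stored numerator/denominator pair *is* the normal form.
assert (Fraction(2, 4).numerator, Fraction(2, 4).denominator) == (1, 2)
```

Comparing canonical pairs is trivial, whereas deciding equivalence of arbitrary unreduced fractions would require extra work per comparison.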


+5
−0

This answer combines the thoughts in both of the other (current) answers (tommi's and r~~'s) but presents them in a more formal setting.

A significant portion of high school (and earlier... and a decent amount of later...) math is highly algorithmic, and, in particular, corresponds to the important idea of finding a normal form. Given a collection of rewrite rules, often produced by orienting equations, a normal form is a term to which none of the rewrite rules applies.

There are various properties we desire of a normal form with respect to a given equivalence relation, call it $\sim$, on syntactic terms. In most applications, this will be a semantic equivalence. Write $N(t)$ for the normal form of the term $t$. Then we want $t \sim t' \implies N(t)=N(t')$, $t \sim N(t)$, and $N(N(t)) = N(t)$. This definition is in terms of an abstract normalization procedure. If we instead have a term rewriting system, which I'll write as $t \leadsto t'$ for "$t$ rewrites to $t'$", then the following properties will induce an abstract normalization procedure. We need confluence and termination to even get a function $N$: termination ensures that a normal form exists, and confluence ensures that it is unique. Next, $t \leadsto t' \implies t \sim t'$ ensures that equivalence is preserved by rewrites. With this, it only remains to show $t \sim t' \implies N(t) = N(t')$.
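To make "apply rewrite rules until none applies" concrete, here is a toy term rewriting system, a sketch of my own (the encoding and rule set are illustrative, not from the answer). Terms are nested tuples, and the two oriented equations are the unit laws of a monoid: $\mathrm{mul}(e, x) \leadsto x$ and $\mathrm{mul}(x, e) \leadsto x$. This system terminates (every rewrite shrinks the term) and is confluent, so iterating to a fixpoint computes $N(t)$.

```python
def rewrite_once(t):
    """Apply the first applicable rule anywhere in t, or return None."""
    if isinstance(t, tuple) and t[0] == "mul":
        _, a, b = t
        if a == "e":          # rule: mul(e, x) -> x
            return b
        if b == "e":          # rule: mul(x, e) -> x
            return a
        for i, sub in ((1, a), (2, b)):   # try rewriting a subterm
            r = rewrite_once(sub)
            if r is not None:
                out = list(t)
                out[i] = r
                return tuple(out)
    return None

def normalize(t):
    """Iterate rewriting to a fixpoint: the normal form N(t)."""
    while True:
        r = rewrite_once(t)
        if r is None:
            return t
        t = r

term = ("mul", ("mul", "e", "x"), ("mul", "y", "e"))
assert normalize(term) == ("mul", "x", "y")
```

Since each rule replaces a term by a semantically equal one, $t \leadsto t' \implies t \sim t'$ holds by construction, and two terms denoting the same monoid element (up to the unit laws) normalize to the same tuple.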

One of the main benefits of normal forms is, indeed, as r~~ suggests, that they simplify comparing answers. In particular, since we often perform term rewriting on syntactic terms, where the notion of equality is simply that two terms are the same syntactic expression, i.e. that they "look the same", normal forms reduce a potentially difficult equivalence relation to a trivial one. This captures the reduction in "cognitive processing" that tommi references. More generally, the existence of normal forms allows us to identify the elements of the semantic domain with the normal forms. That is, it gives us a tight syntactic handle on elements of the semantic domain. We can then learn properties of the semantic domain by studying the properties of the usually much more tractable normal forms.

In practice, we'll often loosen this a bit by requiring only that $t \sim t' \implies N(t) \approx N(t')$ where $\approx$ is a refinement of $\sim$, i.e. $t \approx t' \implies t \sim t'$ but not necessarily the other way. For example, we often don't care about ordering, e.g. the monomials of a polynomial or the atomic terms in disjunctive/conjunctive normal form. It will usually be the case that checking $\approx$ is still significantly easier than checking $\sim$. Several different aspects of automated theorem provers involve computing normal forms.
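The "don't care about ordering" situation can be sketched concretely (my own encoding, not from the answer): representing a disjunctive normal form as a set of clauses, each clause a set of literals, quotients out the ordering, so two formulas differing only in the order of clauses or literals compare equal under the refined relation $\approx$.

```python
# A DNF formula as a frozenset of clauses; each clause a frozenset of
# literals. Ordering (and duplication) of clauses/literals is quotiented
# away by the set structure, so equality checks "equal up to ordering".

def dnf(*clauses):
    return frozenset(frozenset(c) for c in clauses)

# (A and B) or C  vs.  C or (B and A): same formula, different order.
f1 = dnf(["A", "B"], ["C"])
f2 = dnf(["C"], ["B", "A"])
assert f1 == f2
```

Checking this $\approx$ is a cheap set comparison, still far easier than deciding full logical equivalence $\sim$ of arbitrary formulas.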

The notion of normal form is powerful and interesting theoretically as well as practically. Word problems in group theory (and elsewhere), cut elimination in logic, and normalization in type theory and lambda calculus are theoretically important examples.

While, in my experience, the fact that these normal forms exist (e.g. for polynomial expressions), let alone that they can be arrived at mechanically, is not usually pointed out in early (or later...) education, many "simplifications" can indeed be understood as moving towards a normal form, and often "simplified forms" are normal forms in a fairly natural sense, e.g. reduced forms for rationals. Many common representations are also implicitly motivated/justified by the existence of normal forms. In particular, representing a (finite-dimensional) vector as a tuple of coefficients with respect to a basis follows in part from the fact that every vector expression can be reduced to a linear combination of basis vectors. Rationalizing the denominator reflects the fact that elements of the field extension $\mathbb Q(\sqrt 3)$, say, can be represented as vectors with respect to the basis $\{1, \sqrt 3\}$. Rationalizing the denominator is a rewrite that moves you closer to the normal form $a + b\sqrt 3$.
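This last point can be sketched in code (a minimal illustration of my own, assuming nothing beyond the standard library; the class name is made up): storing elements of $\mathbb Q(\sqrt 3)$ as coefficient pairs $(a, b)$ for $a + b\sqrt 3$, division multiplies by the conjugate, which is exactly "rationalizing the denominator", and the result lands back in normal form.

```python
from fractions import Fraction

class QSqrt3:
    """An element a + b*sqrt(3) of Q(sqrt(3)), kept in normal form (a, b)."""

    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)

    def __mul__(self, other):
        # (a + b√3)(c + d√3) = (ac + 3bd) + (ad + bc)√3
        return QSqrt3(self.a * other.a + 3 * self.b * other.b,
                      self.a * other.b + self.b * other.a)

    def __truediv__(self, other):
        # Multiply by the conjugate c - d√3; the denominator c² - 3d²
        # is rational, so the quotient is again of the form a + b√3.
        n = other.a ** 2 - 3 * other.b ** 2
        return QSqrt3((self.a * other.a - 3 * self.b * other.b) / n,
                      (self.b * other.a - self.a * other.b) / n)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

one = QSqrt3(1)        # 1
root3 = QSqrt3(0, 1)   # sqrt(3)

# 1 / sqrt(3) normalizes to 0 + (1/3)*sqrt(3), i.e. sqrt(3)/3.
assert one / root3 == QSqrt3(0, Fraction(1, 3))
```

The textbook form $\sqrt3/3$ is just the normal form that this representation forces on every element.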

Of course, there can be different notions of normal form for the same type of expression, and a non-normal form can present the structure of an expression more clearly; mathematicians are likely to prefer such expressions to their normal forms. By construction, a non-normal-form expression is semantically equivalent to its normal form, so each is as good as the other as far as semantics is concerned.


+2
−0

This answer is based on self-reflection, not research literature, so take it for what it is worth.

Generally speaking, to understand an expression, I would prefer most mathematical objects there to be "simple": positive rather than negative, integer if possible, rational rather than a root expression (though $\pi$ and $e$ are pretty nice).

Furthermore, generally speaking, I would prefer any "non-simple" entities to be in as "simple" contexts as possible.

What "simple" amounts to is that it requires less cognitive processing to figure out how it behaves and how big it is and so on.

Now, the square root of three is about 1.7. If I see the expression $1/\sqrt 3$, then to figure out its approximate value I have to divide by a non-integer, and that is hard. On the other hand, if I see $\sqrt{3}/3$, it amounts to the much easier $1.7/3$.

I think the same applies, more or less, to other kinds of cognitive processing besides getting an idea of the size of an expression. It is simpler to work with complicated expressions when they sit in simple places, and much harder when they sit in complicated places.
