Not asserting an axiom only broadens the class of possible models. Unless you're adding an axiom that states that the type theory is definitively *not* proof irrelevant, any proof-irrelevant model is still a model. Whether or not proof irrelevance is assumed would only matter if it *excluded* some model, or if you wanted to know what could be *proved* in the *theory*, as opposed to what is true in some model.
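For a concrete illustration (using Lean 4 purely as a familiar example of a proof-irrelevant type theory; the question isn't specific to Lean), `Prop` there is *definitionally* proof irrelevant, so the irrelevance equation holds by `rfl`:

```lean
-- A minimal sketch in Lean 4, chosen only as a concrete example. Its `Prop`
-- is definitionally proof irrelevant: any two proofs of the same proposition
-- are equal, with no axiom needed. A theory that merely *omits* this
-- principle still has every such model, exactly as described above.
example (p : Prop) (h₁ h₂ : p) : h₁ = h₂ := rfl
```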

So the answer to your first question is: if you can do the former, then you can do the latter, trivially. The simple-minded semantics you give works and validates proof irrelevance, impredicativity, and the law of excluded middle (assuming all the other relevant operations are interpreted in the "usual" set-theoretic ways). It will still be a model if you drop these assumptions, since dropping an axiom can't cause a model to be excluded. While I believe this technically answers your question, I feel it paints a misleadingly simple picture. Before I continue, I will say that if you just want to think of type theory as a notation for manipulating traditional sets, then that's fine for many type theories (but, notably, not HoTT). Such a view would make many aspects of dependent type theory seem mysterious, unnecessary, or unnecessarily cumbersome, but it would work.
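As a concrete sanity check (again leaning on Lean 4 as a stand-in; this example is mine, not the question's), all three principles validated by the naive set interpretation are present in its `Prop`:

```lean
-- Impredicativity: a quantification over all of `Prop` is itself a `Prop`.
-- The naive interpretation justifies this because a product of subsingletons
-- is again a subsingleton.
#check (∀ p : Prop, p → p : Prop)

-- The law of excluded middle is available classically; the two-valued set
-- interpretation of `Prop` validates it.
#check @Classical.em   -- Classical.em : ∀ (p : Prop), p ∨ ¬p
```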

Also, to address the comment about the slides: it's not making a general claim, and it is explicitly being handwavy. It's probably easiest and most natural to treat sets in bijection as equivalent in this case. Then what it is saying is that we have a (**ZF**) *set* $P$ of cardinals such that for *any* set $A$ and set function $B : A \to P$, $|\prod_{a\in A}B(a)| \in P$. In particular, taking $A = P$ itself and $B$ constantly equal to some $X \in P$ with $|X| > 1$, we'd have $|X^P|\in P$, which would lead to a violation of the axiom of foundation. Even more natural, arguably, is to not quotient into cardinals at all and view **Prop** as a [groupoid](https://en.wikipedia.org/wiki/Groupoid#Category_theoretic) of sets and bijections, where "is" means equivalent as categories.

Regardless, the point of this statement in the slides is to justify why propositional extensionality, proof irrelevance, and the law of excluded middle are reasonable under this set-theoretic interpretation.

----

As a preliminary, let's re-examine the semantics of classical first-order logic (FOL). Typically, the semantics of classical FOL are presented by interpreting a formula with at most $n$ free variables as an $n$-ary relation on a given domain set $D$. This is all well and good and is a sound and complete semantics, but it is arguably not the *natural* range of semantics for classical FOL. It would make more sense to assign to each context of variables (which, in the single-sorted case, we can identify with their lengths) a [boolean algebra](https://en.wikipedia.org/wiki/Boolean_algebra_(structure)). The traditional semantics would then be $n \mapsto 2^{D^n}$. There are many other choices one could make, though, e.g. $n \mapsto B^{D^n}$ for any boolean algebra $B$. There are [certain laws](https://ncatlab.org/nlab/show/Boolean+hyperdoctrine) this mapping from contexts to boolean algebras must satisfy to serve as a sound place to interpret classical FOL, but, suffice it to say, there are still many more options than the traditional choice. The natural notion of validity would then be with respect to all of these possible mappings. It just so happens that it suffices to consider the $n \mapsto 2^{D^n}$ case.
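As a small worked instance of the non-traditional choice (my example, not one from the question or slides), take $B = 2^2$, the four-element boolean algebra. Then

$$n \mapsto (2^2)^{D^n} \cong 2^{D^n} \times 2^{D^n},$$

so each formula is interpreted as a *pair* of $n$-ary relations, with the connectives acting componentwise. This is exactly the shape of the $\mathbf{Set}\times\mathbf{Set}$ example further below.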

The reason I bring this up is that even in "traditional" logic and set theory, there's nothing saying that the set of "propositions" has to be a two-element set. For classical logic, it needs to be a boolean algebra, but there are many boolean algebras with more than two elements, e.g. $2^k$ for any natural $k>1$.

Moving beyond FOL, a broad but not complete class of models for many dependent type theories is the [toposes](https://ncatlab.org/nlab/show/topos) (with a natural numbers object). What a topos is exactly isn't very important for this discussion, though you can follow the link to find out more. For our purposes, what's most interesting about a topos is that it has an object of propositions (an analogue of **Prop**), $\Omega$. As in the previous discussion, we can talk about boolean toposes, where $\Omega$ is (internally) a boolean algebra; in fact, we even have $\Omega \cong 1 + 1$. Nevertheless, it's not the case that $\Omega$ is necessarily two-valued, either internally or externally. The category of (classical) sets, $\mathbf{Set}$, is a boolean topos, but so is $\mathbf{Set}\times\mathbf{Set}$, where the objects are just pairs of sets and everything happens componentwise. The top and bottom elements of the $\Omega$ for $\mathbf{Set}\times\mathbf{Set}$ are $(\top,\top)$ and $(\bot,\bot)$ as you'd expect, but we also have $(\top,\bot)$ and $(\bot,\top)$. So, externally, we see that the interpretation of $\Omega$ in this model is a four-element set, and even internally we have four global elements of $\Omega$. I describe a non-boolean topos that has two global elements of $\Omega$ (though externally three elements) [here](https://math.stackexchange.com/a/1682622).
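To make the four truth values concrete, here's a small Lean 4 sketch (all names are my own inventions for illustration, assuming only Lean core): propositional formulas are evaluated in an arbitrary algebra of truth values, where instantiating at `Bool` recovers the traditional two-valued semantics, and `Bool × Bool` with componentwise operations mirrors the $\Omega$ of $\mathbf{Set}\times\mathbf{Set}$.

```lean
-- `TruthAlg B` packages the boolean-algebra operations on a type `B` of
-- truth values (laws omitted in this sketch).
structure TruthAlg (B : Type) where
  top   : B
  bot   : B
  meet  : B → B → B
  join  : B → B → B
  compl : B → B

inductive Frm where
  | var : Nat → Frm
  | and : Frm → Frm → Frm
  | or  : Frm → Frm → Frm
  | not : Frm → Frm

def evalFrm {B : Type} (alg : TruthAlg B) (v : Nat → B) : Frm → B
  | .var n   => v n
  | .and p q => alg.meet (evalFrm alg v p) (evalFrm alg v q)
  | .or  p q => alg.join (evalFrm alg v p) (evalFrm alg v q)
  | .not p   => alg.compl (evalFrm alg v p)

-- The traditional two truth values…
def twoVal : TruthAlg Bool :=
  { top := true
    bot := false
    meet := fun a b => a && b
    join := fun a b => a || b
    compl := fun a => !a }

-- …and the four truth values of Set × Set, acting componentwise. Besides
-- top (true, true) and bottom (false, false) there are the "mixed" values
-- (true, false) and (false, true).
def fourVal : TruthAlg (Bool × Bool) :=
  { top := (true, true)
    bot := (false, false)
    meet := fun a b => (a.1 && b.1, a.2 && b.2)
    join := fun a b => (a.1 || b.1, a.2 || b.2)
    compl := fun a => (!a.1, !a.2) }

-- Both algebras are boolean, so p ∨ ¬p evaluates to top even when the
-- variable takes a mixed value:
#eval evalFrm fourVal (fun _ => (true, false)) (.or (.var 0) (.not (.var 0)))
-- (true, true)
```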

Another aspect these examples subtly allude to is that it's not enough to say *what* "propositions" are; you also have to say *how* they are used. That is, how the rest of the type theory is interpreted matters as well. For example, there are certainly non-boolean toposes where $|\Omega|=4$, so the underlying set of $\Omega$ alone doesn't determine the logic. More pertinently, if we have a model where our types are interpreted as groupoids and terms give rise to functors, then we could have two groupoids that are wildly different as sets but are nevertheless indistinguishable by the model of the type theory, e.g. the terminal groupoid and the indiscrete groupoid on any non-empty set of objects, which are equivalent as categories.

Ultimately, traditional (classical) set-theoretic semantics is not adequate to analyze many type theories. Categorical semantics has significantly broadened and, in my opinion, clarified semantics, even for traditional logics. However, it also allows much more "weird" and subtle things to happen, and many simplifications and conflations of traditional set-theoretic model theory no longer apply. My point with this latter part is not just that you can find strange semantics where weird things happen, but that "naive" set-theoretic semantics are often somewhat degenerate. In an analogy that's more than an analogy, it would be like studying topology but only considering discrete spaces. You end up with a lot of machinery that isn't doing anything, and a whole world of options that you're not exploring.