Post History

66%
+2 −0
Q&A The purpose of logical frameworks in specifying type theories

1 answer · posted 1y ago by user205 · last activity 1y ago by Derek Elkins

Question type-theory logic
#3: Post edited by user205 · 2023-03-26T20:23:45Z (over 1 year ago)
  • I'm trying to understand the notion of a logical framework and how/why/when it's used to define type theories. I'm looking at Luo's "Computation and Reasoning" (1994), where he considers "LF, a typed version of Martin-Löf's logical framework".
  • >LF is a simple type system with terms of the following forms:
  • >$$\textbf {Type}, El(A), (x:K)K', [x:K]k', f(k),$$
  • >where the free occurrences of variable $x$ in $K'$ and $k'$ are bound by the binding operators $(x:K)$ and $[x:K]$, respectively.
  • (Note: $[x:K]b$ means $\lambda x:K. b$ and $(x:K)K'$ means $\Pi x:K.K'$.)
  • > There are five forms of judgements in LF:
  • >
  • > [![enter image description here][1]][1]
  • And then he gives rules for inferring judgements in LF ([one][2] and [two][3]).
  • > In general, a specification of a type theory will consist of a collection of declarations of new constants and a collection of computation rules (usually about the new constants).
  • For example, sigma types can be specified by declaring the following constants:
  • [![enter image description here][4]][4]
  • ----
  • I have several questions about this setup, but the most basic (and general) one is this: suppose somebody wants to define a type theory with sigma types. What are the benefits of invoking LF to define it?
  • As far as I understand, there's a more "direct" way to define it -- for example, as in Appendix A2 of the HoTT book, where sigma types are introduced by giving rules of formation, introduction, elimination, computation (as opposed to introducing several "constants").
  • Also, in Lungu's PhD thesis "Subtyping in signatures" written under Luo, she uses Luo's LF (defined above) and introduces constants for product types ([pic][6]), as well as inference rules for product types ([pic][7]). This makes me even more confused -- if, when using a logical framework to specify a particular type theory, we still need to provide inference rules even for things that are encoded in the logical framework as basic terms (I'm talking about dependent product types and lambda terms - they are basic terms in LF by the first definition above), why do we need the logical framework at all? It looks like it's some extraneous mechanism that makes specifying type theories more complicated (although I'm sure it's the opposite, I just don't understand why).
  • Further, why does one only take dependent product types and lambda terms as basic terms in LF? Why not include sigma types as well as their inhabitants as basic terms in LF? I suppose the reason is that most dependent type theories have product types but not necessarily sigma types, but if we were to specify a type theory with sigma types, would it be reasonable to consider a logical framework which is obtained from the framework mentioned at the beginning by adding sigma types and their inhabitants as terms?
  • ---
  • Edit: I got an answer on another Q&A platform, and here's my current understanding of this matter. Correct me if I'm wrong but I think the idea is that if we have [these rules](https://i.stack.imgur.com/NHhx0.png) specified in LF and if we introduce Pi and Lambda by declaring [these constants](https://i.stack.imgur.com/q2VZx.png), then we don't need to postulate [these rules](https://i.stack.imgur.com/jKfSV.png) for Pi and Lambda separately because they follow automatically from the rules for dependent products that are part of the definition of LF. And similarly for sigma types: if we want to introduce Sigma in the object theory and if we do this by declaring [these constants](https://i.stack.imgur.com/mfyny.png), then the [standard inference rules for sigma types](https://i.stack.imgur.com/LewvK.png) will automatically follow from the rules for dependent types in LF. And if we want to introduce some new constant in the object theory, its type must have one of the following forms: $\textbf {Type}, El(A), (x:K)K', [x:K]k', f(k)$, and then all desirable inference rules for the newly introduced constant will follow from the dependent type rules in LF.
  • [1]: https://i.stack.imgur.com/N7tOU.png
  • [2]: https://i.stack.imgur.com/CpaoL.png
  • [3]: https://i.stack.imgur.com/NHhx0.png
  • [4]: https://i.stack.imgur.com/mfyny.png
  • [5]: https://math.stackexchange.com/a/4638042/1048887
  • [6]: https://i.stack.imgur.com/q2VZx.png
  • [7]: https://i.stack.imgur.com/jKfSV.png
#2: Post edited by user205 · 2023-03-26T14:50:38Z (over 1 year ago)
  • I'm trying to understand the notion of a logical framework and how/why/when it's used to define type theories. I'm looking at Luo's "Computation and Reasoning" (1994), where he considers "LF, a typed version of Martin-Löf's logical framework".
  • >LF is a simple type system with terms of the following forms:
  • >$$\textbf {Type}, El(A), (x:K)K', [x:K]k', f(k),$$
  • >where the free occurrences of variable $x$ in $K'$ and $k'$ are bound by the binding operators $(x:K)$ and $[x:K]$, respectively.
  • (Note: $[x:K]b$ means $\lambda x:K. b$ and $(x:K)K'$ means $\Pi x:K.K'$.)
  • > There are five forms of judgements in LF:
  • >
  • > [![enter image description here][1]][1]
  • And then he gives rules for inferring judgements in LF ([one][2] and [two][3]).
  • > In general, a specification of a type theory will consist of a collection of declarations of new constants and a collection of computation rules (usually about the new constants).
  • For example, sigma types can be specified by declaring the following constants:
  • [![enter image description here][4]][4]
  • ----
  • I have several questions about this setup, but the most basic (and general) one is this: suppose somebody wants to define a type theory with sigma types. What are the benefits of invoking LF to define it?
  • As far as I understand, there's a more "direct" way to define it -- for example, as in Appendix A2 of the HoTT book, where sigma types are introduced by giving rules of formation, introduction, elimination, computation (as opposed to introducing several "constants").
  • Also, in Lungu's PhD thesis "Subtyping in signatures" written under Luo, she uses Luo's LF (defined above) and introduces constants for product types ([pic][6]), as well as inference rules for product types ([pic][7]). This makes me even more confused -- if, when using a logical framework to specify a particular type theory, we still need to provide inference rules even for things that are encoded in the logical framework as basic terms (I'm talking about dependent product types and lambda terms - they are basic terms in LF by the first definition above), why do we need the logical framework at all? It looks like it's some extraneous mechanism that makes specifying type theories more complicated (although I'm sure it's the opposite, I just don't understand why).
  • Further, why does one only take dependent product types and lambda terms as basic terms in LF? Why not include sigma types as well as their inhabitants as basic terms in LF? I suppose the reason is that most dependent type theories have product types but not necessarily sigma types, but if we were to specify a type theory with sigma types, would it be reasonable to consider a logical framework which is obtained from the framework mentioned at the beginning by adding sigma types and their inhabitants as terms?
  • [1]: https://i.stack.imgur.com/N7tOU.png
  • [2]: https://i.stack.imgur.com/CpaoL.png
  • [3]: https://i.stack.imgur.com/NHhx0.png
  • [4]: https://i.stack.imgur.com/mfyny.png
  • [5]: https://math.stackexchange.com/a/4638042/1048887
  • [6]: https://i.stack.imgur.com/q2VZx.png
  • [7]: https://i.stack.imgur.com/jKfSV.png
#1: Initial revision by user205 · 2023-03-26T14:49:03Z (over 1 year ago)
The purpose of logical frameworks in specifying type theories
I'm trying to understand the notion of a logical framework and how/why/when it's used to define type theories. I'm looking at Luo's "Computation and Reasoning" (1994), where he considers "LF, a typed version of Martin-Löf's logical framework".

>LF is a simple type system with terms of the following forms: 
>$$\textbf {Type}, El(A), (x:K)K', [x:K]k', f(k),$$
>where the free occurrences of variable $x$ in $K'$ and $k'$ are bound by the binding operators $(x:K)$ and $[x:K]$, respectively.

(Note: $[x:K]b$ means $\lambda x:K. b$ and $(x:K)K'$ means $\Pi x:K.K'$.)
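
For concreteness, the quoted syntax can be read into Lean as a rough sketch (not Luo's LF itself; `Ty` and `El` are illustrative names): $\textbf{Type}$ and $El(A)$ become postulated constants, while $(x:K)K'$, $[x:K]k'$ and $f(k)$ are played by Lean's dependent function types, λ-abstraction and application.

```lean
-- A minimal sketch of LF's base kinds, assuming nothing beyond the quoted
-- definition; `Ty` and `El` are illustrative names, not notation from the book.
axiom Ty : Type        -- plays the role of the LF kind **Type** (object-level types)
axiom El : Ty → Type   -- plays the role of El(A), the kind of objects of type A
-- Dependent kinds (x:K)K' correspond to dependent function types here,
-- [x:K]k' to `fun x : K => k'`, and f(k) to ordinary application.
```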

> There are five forms of judgements in LF:
>
> [![enter image description here][1]][1]

And then he gives rules for inferring judgements in LF ([one][2] and [two][3]).

> In general, a specification of a type theory will consist of a collection of declarations of new constants and a collection of computation rules (usually about the new constants). 

For example, sigma types can be specified by declaring the following constants: 

[![enter image description here][4]][4]
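
The image is not reproduced here, but the following is a hedged reconstruction of what such a declaration typically looks like, written as Lean axioms; `Sig`, `mkPair`, `sigElim` and `sigComp` are illustrative names, and for readability the $El(-)$ layer is elided so that object-level types are written directly as Lean types.

```lean
-- A sketch of "specifying Σ-types by declaring constants": no inference
-- rules are written down, only typed constants (plus one computation rule).
axiom Sig     : (A : Type) → (A → Type) → Type                        -- formation
axiom mkPair  : {A : Type} → {B : A → Type} →
                (a : A) → B a → Sig A B                                -- introduction
axiom sigElim : {A : Type} → {B : A → Type} →
                (C : Sig A B → Type) →
                ((a : A) → (b : B a) → C (mkPair a b)) →
                (z : Sig A B) → C z                                    -- elimination
axiom sigComp : ∀ {A : Type} {B : A → Type}
                (C : Sig A B → Type)
                (f : (a : A) → (b : B a) → C (mkPair a b))
                (a : A) (b : B a),
                sigElim C f (mkPair a b) = f a b                       -- computation
```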

----

I have several questions about this setup, but the most basic (and general) one is this: suppose somebody wants to define a type theory with sigma types. What are the benefits of invoking LF to define it?
 As far as I understand, there's a more "direct" way to define it -- for example, as in Appendix A2 of the HoTT book, where sigma types are introduced by giving rules of formation, introduction, elimination, computation (as opposed to introducing several "constants"). 
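
For concreteness, here is a small sketch of that "direct" style in Lean (an illustration, not the HoTT book's exact presentation): the formation and introduction rules are given by an inductive definition, and the eliminator with its computation rule comes with it rather than being declared as separate constants. `MySigma` is an illustrative name.

```lean
-- "Direct" specification: formation + introduction as an inductive type;
-- the eliminator (MySigma.rec) and its computation rule are generated.
inductive MySigma (A : Type) (B : A → Type) : Type where
  | mk : (a : A) → B a → MySigma A B

-- e.g. the first projection, defined by the elimination principle
def MySigma.fst {A : Type} {B : A → Type} : MySigma A B → A
  | .mk a _ => a
```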

Also, in Lungu's PhD thesis "Subtyping in signatures" written under Luo, she uses Luo's LF (defined above) and introduces constants for product types ([pic][6]), as well as inference rules for product types ([pic][7]). This makes me even more confused -- if, when using a logical framework to specify a particular type theory, we still need to provide inference rules even for things that are encoded in the logical framework (I'm talking about dependent product types), why do we need the logical framework at all? It looks like it's some extraneous mechanism that makes specifying type theories more complicated (although I'm sure it's the opposite, I just don't understand why).

Further, why does one only take dependent product types and lambda terms as basic terms in LF? Why not include sigma types as well as their inhabitants as basic terms in LF? I suppose the reason is that most dependent type theories have product types but not necessarily sigma types, but if we were to specify a type theory with sigma types, would it be reasonable to consider a logical framework which is obtained from the framework mentioned at the beginning by adding sigma types and their inhabitants as terms?
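
As the edit recorded in revision #3 above suggests, the point of the framework is that once the constants are declared, the usual Σ rules do not have to be postulated one by one: they fall out of LF's own rules for dependent kinds. A minimal self-contained sketch of that idea (same illustrative names as in the earlier sketch, redeclared here so it stands alone):

```lean
-- Constants as before (illustrative names).
axiom Sig     : (A : Type) → (A → Type) → Type
axiom mkPair  : {A : Type} → {B : A → Type} → (a : A) → B a → Sig A B
axiom sigElim : {A : Type} → {B : A → Type} →
                (C : Sig A B → Type) →
                ((a : A) → (b : B a) → C (mkPair a b)) →
                (z : Sig A B) → C z

-- The familiar projection "rule" is not postulated separately; it is obtained
-- simply by applying the declared eliminator, i.e. by the framework's own
-- application rule for dependent kinds.
noncomputable def sigFst {A : Type} {B : A → Type} (p : Sig A B) : A :=
  sigElim (fun _ => A) (fun a _ => a) p
```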
  [1]: https://i.stack.imgur.com/N7tOU.png
  [2]: https://i.stack.imgur.com/CpaoL.png
  [3]: https://i.stack.imgur.com/NHhx0.png
  [4]: https://i.stack.imgur.com/mfyny.png
  [5]: https://math.stackexchange.com/a/4638042/1048887
  [6]: https://i.stack.imgur.com/q2VZx.png
  [7]: https://i.stack.imgur.com/jKfSV.png