What inference rules are invalid in empty L-structures?























I want to understand why empty L-structures are illegal/not allowed in classical FOL. I read here but didn't understand the answer. It seems that there is some inference rule in FOL (as described in these notes) that becomes invalid when the underlying set of the L-structure is empty, but I don't understand why.




Is it because of some inference rule involving quantifiers that empty L-structures are illegal, or is there a different reason why they are disallowed?




I also asked:



Is the reason that vacuous statements are True because empty L-structures are illegal?



as a suggestion that perhaps vacuous statements are the reason, but I still don't have an answer that explains it. Perhaps it's the inference rule; perhaps it's something else. I don't know yet.









































Tags: logic, first-order-logic, predicate-logic






asked Nov 17 at 20:46 by Pinocchio






















2 Answers






























In empty structures, $\forall x\, Fx$ is vacuously true and $\exists x\, Fx$ is false. So $\forall x\, Fx \therefore \exists x\, Fx$ will count as invalid if empty structures are allowed. Standard predicate logic takes this to be valid.
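
As a quick sanity check of this claim, here is a minimal Python sketch of the standard quantifier clauses over a finite (possibly empty) domain; the predicate `F` is an arbitrary stand-in:

```python
# Minimal sketch of the standard quantifier clauses over a finite
# (possibly empty) domain. F is an arbitrary stand-in predicate.

def forall(domain, pred):
    # Vacuously True when the domain is empty.
    return all(pred(a) for a in domain)

def exists(domain, pred):
    # False when the domain is empty: there is no witness.
    return any(pred(a) for a in domain)

F = lambda a: True

print(forall([], F))  # True:  "forall x Fx" holds vacuously
print(exists([], F))  # False: "exists x Fx" fails
```

So the empty structure is a countermodel to "$\forall x\, Fx$, therefore $\exists x\, Fx$" as soon as it is admitted.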



          What to do?



          The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.



A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($\forall x\, Fx$) rather than binary (something like $\forall x\,(Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $\forall x\,(Fx \to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.



          Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.



Similarly, we standardly assume in our predicate logic that names all have references and domains are populated. That buys us some simplicity at the cost of offending against the principle that logic should be topic neutral and that the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see what the cost is of different strategies for allowing denotation-less names and empty domains).



But, having said that, if we keep the usual assumption that names have denotations but allow that domains can be empty for a signature that lacks names, then you can get a tidy system of logic (obviously, for a signature with names, if we are assuming names have denotations, we had better have a populated domain for those denotations to live in!). In fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.



          But as he sets things up, this does lead to an odd result. In his system we apparently get





1. $\forall x\, Fx \vdash Fn$
2. $Fn \vdash \exists x\, Fx$
3. $\forall x\, Fx \nvdash \exists x\, Fx$


That looks like a failure of transitivity of entailment. So to save transitivity, we'd have to say that $\forall x\, Fx \vdash \exists x\, Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $\alpha \therefore \beta$ as depending for its validity only on the content of $\alpha$ and $\beta$.



          You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.






answered Nov 18 at 0:13, edited Nov 18 at 0:35, by Peter Smith











































If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that those rules don't require the domain to be non-empty. (So this is one answer to the question of which [primitive] rules are valid in an empty domain.)



I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$\frac{\Gamma\vdash\forall x.\varphi(x)}{\Gamma\vdash\varphi(t)}\rlap{\;\small{\forall E,t}}$$ where $\Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $\bot$ from $\forall x.\bot$, i.e. proving $\neg\forall x.\bot$, which is equivalent to $\exists x.\top$, i.e. to the domain being non-empty.



            The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the the resulting formula. Here's where a more careful analysis can be helpful. Write $Gammavdash_V psi$ to mean $Gammavdashpsi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $Gammavdash_Vpsi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $vdash_V$ is then a (meta-)relation on sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $Gammavdash_varnothingpsi$.



To (indirectly) show that the domain is non-empty, we'd want to show $\vdash_\varnothing \neg\forall x.\bot$. Here's an attempted derivation: $$\dfrac{\dfrac{\dfrac{}{\forall x.\bot\vdash_\varnothing\forall x.\bot}\rlap{\;\small{Ax}}}{\forall x.\bot\vdash_\varnothing\bot}\rlap{\;\small{\forall E,?}}}{\vdash_\varnothing\neg\forall x.\bot}\rlap{\;\small{{\to}I}}$$



            This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.
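
To make the failure concrete, here is a small Python sketch, under a hypothetical encoding of signatures, that enumerates closed terms. Constants and function symbols are the only generators, so a signature without constants yields no closed terms at all:

```python
# Closed terms are generated only by constants and function symbols.
# Hypothetical signature encoding: constants as 0-ary applications,
# functions as a dict symbol -> arity.

def _tuples(xs, n):
    # All length-n tuples over xs.
    if n == 0:
        return [()]
    return [(x,) + rest for x in xs for rest in _tuples(xs, n - 1)]

def closed_terms(constants, functions, depth):
    terms = [(c, ()) for c in constants]
    for _ in range(depth):
        for f, arity in functions.items():
            for args in _tuples(terms, arity):
                t = (f, args)
                if t not in terms:
                    terms.append(t)
    return terms

print(closed_terms([], {"f": 1}, 3))     # []: no constants, so no closed terms
print(closed_terms(["c"], {"f": 1}, 2))  # [('c', ()), ('f', (('c', ()),)), ...]
```

With the enumeration empty, the side condition of universal elimination can never be met, which is exactly why the derivation above gets stuck.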



What happens in many contexts, particularly for Hilbert-style systems, is that the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.



We get the semantic analogue of the above as follows. If the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify values for (typically countably infinitely many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.
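
The same point, sketched semantically in Python (the encoding is illustrative): if assignments are indexed by the set $V$ of variables actually in use, then $V=\varnothing$ has exactly one (empty) assignment even over the empty domain, whereas demanding a value for even one variable rules the empty domain out:

```python
from itertools import product

def assignments(domain, V):
    # All assignments of domain elements to the variables in V
    # (illustrative: V is a set of variable names).
    V = sorted(V)
    return [dict(zip(V, vals)) for vals in product(domain, repeat=len(V))]

print(assignments([], set()))      # [{}]: one empty assignment; empty domain is fine
print(assignments([], {"x"}))      # []: no way to give x a value
print(assignments([0, 1], {"x"}))  # [{'x': 0}, {'x': 1}]
```

If every interpretation had to assign values to infinitely many variables up front, as in the traditional setup, the empty domain would have no assignments at all and would be excluded outright.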



So, as Eric Wofsey said in a comment on your other question, it's a bit of a "historical artifact", quite possibly from something that was somewhat accidental. The core invalid rule, at least from the perspective of natural deduction/the sequent calculus, is doing universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many rules derivable from this invalid rule that would also be invalid.






answered Nov 17 at 21:52, edited Nov 19 at 1:20, by Derek Elkins



















• Are you suggesting a set of rules such that, for example, $\forall x.Px \to \exists x.Px$ would not be derivable? If so, how would you prove $(\exists y\forall x.Pxy) \to (\forall x\exists y.Pxy)$ in such a system? Would you like me to ask this as a separate question? – DanielV, Nov 17 at 23:52








• Yes, the former would not be derivable with $V=\varnothing$, as in that case you'd semantically get true implies false (given an empty domain), which you don't want. For the second formula, you use universal introduction, which looks like $\dfrac{\Gamma\vdash_{\{x\}\cup V}\varphi(x)}{\Gamma\vdash_V \forall x.\varphi(x)}$, similarly for existential elimination. This will make $x$ and $y$ available as terms you can use in universal elimination and existential introduction. Nothing particularly tricky happens. – Derek Elkins, Nov 18 at 0:41








• @DanielV In my model theory course this semester, we started off by proving the completeness theorem in a semantics which allows empty structures. The proof system is along the lines of what Derek Elkins suggests in this answer. You can see it in Chapter 3 of my lecture notes (pp. 20-30), available here: pages.iu.edu/~akruckma/M682.html – Alex Kruckman, Nov 18 at 19:52








• @CarlMummert The inference rules are universal elimination and existential introduction when you have no terms. Those are the rules that traditional descriptions have that the description I gave lacks. We could make these explicit as $\dfrac{\Gamma\vdash\forall x.\varphi}{\Gamma\vdash\varphi}$ where $x$ doesn't occur in $\varphi$, and similarly for $\exists I$. A more accurate description of what's happening in the traditional case is that free variables behave like constants, and traditional descriptions only consider cases with an infinite supply of constants (which forces a non-empty domain). – Derek Elkins, Nov 19 at 1:56








• @CarlMummert Neither Eric Wofsey nor I said that it was only a historical artifact, but I, at least, do think the traditional presentation incorporates it in a subtle, non-obvious way and that it would be better if it were a more clearly separated assumption. I also think that if you set out to define a type of precisely the closed well-formed formulas, you end up wanting well-formed formulas and terms with a given set of free variables. Once you make this design decision, it strongly suggests rules and semantics that do allow an empty domain, and the traditional approach becomes unnatural. – Derek Elkins, Nov 19 at 1:59











            Your Answer





            StackExchange.ifUsing("editor", function () {
            return StackExchange.using("mathjaxEditing", function () {
            StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
            StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
            });
            });
            }, "mathjax-editing");

            StackExchange.ready(function() {
            var channelOptions = {
            tags: "".split(" "),
            id: "69"
            };
            initTagRenderer("".split(" "), "".split(" "), channelOptions);

            StackExchange.using("externalEditor", function() {
            // Have to fire editor after snippets, if snippets enabled
            if (StackExchange.settings.snippets.snippetsEnabled) {
            StackExchange.using("snippets", function() {
            createEditor();
            });
            }
            else {
            createEditor();
            }
            });

            function createEditor() {
            StackExchange.prepareEditor({
            heartbeatType: 'answer',
            convertImagesToLinks: true,
            noModals: true,
            showLowRepImageUploadWarning: true,
            reputationToPostImages: 10,
            bindNavPrevention: true,
            postfix: "",
            imageUploader: {
            brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
            contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
            allowUrls: true
            },
            noCode: true, onDemand: true,
            discardSelector: ".discard-answer"
            ,immediatelyShowMarkdownHelp:true
            });


            }
            });














            draft saved

            draft discarded


















            StackExchange.ready(
            function () {
            StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fmath.stackexchange.com%2fquestions%2f3002791%2fwhat-inference-rules-are-invalid-in-empty-l-structures%23new-answer', 'question_page');
            }
            );

            Post as a guest















            Required, but never shown

























            2 Answers
            2






            active

            oldest

            votes








            2 Answers
            2






            active

            oldest

            votes









            active

            oldest

            votes






            active

            oldest

            votes








            up vote
            1
            down vote













            In empty structures, $forall x Fx$ is vacuously true, $exists x Fx$ is false. So $forall x Fx therefore exists x Fx$ will count as invalid, if empty structures are allowed. Standard predicate logic takes this to be valid.



            What to do?



            The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.



            A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($forall x Fx$) rather than binary (something like $forall x (Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $forall x (Fx to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.



            Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.



            Similarly, we standardly assume in our predicate logic that names all have references and domains are populated.That buys us some simplicity at the cost of a offending against the principle that logic should be topic neutral and the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see what the cost is of different strategies for allowing denotation-less names and empty domains).



            But, having said that, if we keep the usual assumption that names have denotations but we allow that domains can be empty for a signature that lacks names then you can get a tidy system of logic (obviously for a signature with names, if we are assuming names have denotations, we better have a populated domain for these denotations to live in!). If fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.



            But as he sets things up, this does lead to an odd result. In his system we apparently get





            1. $forall x Fx vdash Fn$

            2. $Fn vdash exists x Fx$

            3. $forall x Fx nvdash exists x Fx$


            Which looks like a failure of transitivity of entailment. So save transitivity, we'd have to say that $forall x Fx vdash exists x Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $alpha therefore beta$ as depending for its validity only on the content of $alpha$ and $beta$.



            You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.






            share|cite|improve this answer



























              up vote
              1
              down vote













              In empty structures, $forall x Fx$ is vacuously true, $exists x Fx$ is false. So $forall x Fx therefore exists x Fx$ will count as invalid, if empty structures are allowed. Standard predicate logic takes this to be valid.



              What to do?



              The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.



              A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($forall x Fx$) rather than binary (something like $forall x (Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $forall x (Fx to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.



              Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.



              Similarly, we standardly assume in our predicate logic that names all have references and domains are populated.That buys us some simplicity at the cost of a offending against the principle that logic should be topic neutral and the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see what the cost is of different strategies for allowing denotation-less names and empty domains).



              But, having said that, if we keep the usual assumption that names have denotations but we allow that domains can be empty for a signature that lacks names then you can get a tidy system of logic (obviously for a signature with names, if we are assuming names have denotations, we better have a populated domain for these denotations to live in!). If fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.



              But as he sets things up, this does lead to an odd result. In his system we apparently get





              1. $forall x Fx vdash Fn$

              2. $Fn vdash exists x Fx$

              3. $forall x Fx nvdash exists x Fx$


              Which looks like a failure of transitivity of entailment. So save transitivity, we'd have to say that $forall x Fx vdash exists x Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $alpha therefore beta$ as depending for its validity only on the content of $alpha$ and $beta$.



              You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.






              share|cite|improve this answer

























                up vote
                1
                down vote










                up vote
                1
                down vote









                In empty structures, $forall x Fx$ is vacuously true, $exists x Fx$ is false. So $forall x Fx therefore exists x Fx$ will count as invalid, if empty structures are allowed. Standard predicate logic takes this to be valid.



                What to do?



                The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.



                A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($forall x Fx$) rather than binary (something like $forall x (Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $forall x (Fx to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.



                Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.



                Similarly, we standardly assume in our predicate logic that names all have references and domains are populated.That buys us some simplicity at the cost of a offending against the principle that logic should be topic neutral and the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see what the cost is of different strategies for allowing denotation-less names and empty domains).



                But, having said that, if we keep the usual assumption that names have denotations but we allow that domains can be empty for a signature that lacks names then you can get a tidy system of logic (obviously for a signature with names, if we are assuming names have denotations, we better have a populated domain for these denotations to live in!). If fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.



                But as he sets things up, this does lead to an odd result. In his system we apparently get





                1. $forall x Fx vdash Fn$

                2. $Fn vdash exists x Fx$

                3. $forall x Fx nvdash exists x Fx$


                Which looks like a failure of transitivity of entailment. So save transitivity, we'd have to say that $forall x Fx vdash exists x Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $alpha therefore beta$ as depending for its validity only on the content of $alpha$ and $beta$.



                You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.






                share|cite|improve this answer














                In empty structures, $forall x Fx$ is vacuously true, $exists x Fx$ is false. So $forall x Fx therefore exists x Fx$ will count as invalid, if empty structures are allowed. Standard predicate logic takes this to be valid.



                What to do?



                The principles of logic aren't handed down, once and for all, on tablets of stone. Rather, choice of logical system involves cost-benefit assessments -- e.g. we buy ease of manipulation here at the cost of unnaturalness or artificiality there.



                A familiar example: recall that in our standard formal language for predicate logic the basic form of quantifiers is unary ($forall x Fx$) rather than binary (something like $forall x (Fx : Gx)$). The cost of this is that we have to render restricted quantifications using connectives, so 'All $F$ are $G$' has to be rendered as $forall x (Fx to Gx)$. Warranting a syllogism in Barbara then comes to depend on the logic of conditionals.



                Now, on reflection, isn't it implausible that we should understand Barbara as valid in virtue of something as contentious as the logic of conditionals?! Yet we normally cheerfully pay the price, thinking that the gain in simplicity we get by sticking with unary quantifiers is worth the cost in a certain artificiality in the rendering of restricted quantifiers.



                Similarly, we standardly assume in our predicate logic that names all have references and domains are populated.That buys us some simplicity at the cost of a offending against the principle that logic should be topic neutral and the validity of inferences shouldn't depend on how the world actually is. And again, we normally cheerfully pay the price, thinking that the gain in simplicity is worth it (have a look at varieties of "free logic" to see what the cost is of different strategies for allowing denotation-less names and empty domains).



                But, having said that, if we keep the usual assumption that names have denotations but we allow that domains can be empty for a signature that lacks names then you can get a tidy system of logic (obviously for a signature with names, if we are assuming names have denotations, we better have a populated domain for these denotations to live in!). If fact, the elementary textbook Logic by Wilfrid Hodges has a tableaux system of this kind.



                But as he sets things up, this does lead to an odd result. In his system we apparently get





                1. $forall x Fx vdash Fn$

                2. $Fn vdash exists x Fx$

                3. $forall x Fx nvdash exists x Fx$


                Which looks like a failure of transitivity of entailment. So save transitivity, we'd have to say that $forall x Fx vdash exists x Fx$ holds when working in a signature which has (denoting) names, but fails in a signature without names where the domain can be empty. But normally we think of an argument $alpha therefore beta$ as depending for its validity only on the content of $alpha$ and $beta$.



                You pays your money and you makes your choices. The conventional assumption of predicate logic (names have references, domains are populated) reflects the suppositions that are usually in play as we argue -- we suppose, at least temporarily for the sake of argument, that our talk isn't about nothing -- and buys simplicity of operation. For most purposes, the price is right. Or so goes the usual story.







                share|cite|improve this answer














                share|cite|improve this answer



                share|cite|improve this answer








                edited Nov 18 at 0:35

























                answered Nov 18 at 0:13









                Peter Smith

                40.2k339118




                40.2k339118






















                    up vote
                    0
                    down vote













                    If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that the rules for quantifiers don't require the domain to be non-empty. (So this is one answer for what [primitive] rules are valid in an empty domain.)



                    I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$frac{Gammavdashforall x.varphi(x)}{Gammavdashvarphi(t)}rlap{small{forall I,t}}$$ where $Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $bot$ from $forall x.bot$, i.e. proving $negforall x.bot$ which is equivalent to $exists x.top$, i.e. the domain being non-empty.



                    The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the the resulting formula. Here's where a more careful analysis can be helpful. Write $Gammavdash_V psi$ to mean $Gammavdashpsi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $Gammavdash_Vpsi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $vdash_V$ is then a (meta-)relation on sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $Gammavdash_varnothingpsi$.



                    To (indirectly) show that the domain is non-empty, we'd want to show $vdash_varnothing negforall x.bot$. Here's an attempted derivation: $$dfrac{dfrac{dfrac{}{forall x.botvdash_varnothingforall x.bot}rlap{small{Ax}}}{forall x.botvdash_varnothingbot}rlap{small{forall I,?}}}{vdash_varnothingnegforall x.bot}rlap{small{{to}I}}$$



                    This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.



                    What happens in many contexts, particularly for Hilbert-style systems, is the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.



                    We get the semantic analogue of the above as follows. If we the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify the values for (typically countably infinitly many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.



                    So, as Eric Wofsey said as a comment in your other question, it's a bit of a "historical artifact". Quite possibly from something that was somewhat accidental. The core invalid rule, at least from the perspective of the natural deduction/sequent calculus, is doing universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many derivable rules from this invalid rule that would also be invalid.






                    share|cite|improve this answer



















                    • 1




                      Are you suggesting a set of rules such that, for example, $forall x . Px to exists x.Px$ would not be derivable? If so, how would you prove $(exists y forall x.Pxy) to (forall xexists y.Pxy)$ in such a system? Would you like me to ask this as a separate question?
                      – DanielV
                      Nov 17 at 23:52








                    • 1




                      Yes, the former would not be derivable with $V=varnothing$ as in that case you'd semantically get true implies false (given an empty domain) which you don't want. For the second formula, you use universal introduction which looks like $dfrac{Gammavdash_{{x}cup V}varphi(x)}{Gammavdash_V forall x.varphi(x)}$, similarly for existential elimination. This will make $x$ and $y$ available as terms you can use in universal elimination and existential introduction. Nothing particularly tricky happens.
                      – Derek Elkins
                      Nov 18 at 0:41








                    • 2




                      @DanielV In my model theory course this semester, we started off by proving the completeness theorem in a semantics which allows empty structures. The proof system is along the lines of what Derek Elkins suggests in this answer. You can see it in Chapter 3 of my lectures notes (pp. 20-30), available here: pages.iu.edu/~akruckma/M682.html
                      – Alex Kruckman
                      Nov 18 at 19:52








                    • 1




                      @CarlMummert The inference rules are universal elimination and existential introduction when you have no terms. Those are the rules that traditional descriptions have that the description I gave lacks. We could make these explicit as $dfrac{Gammavdashforall x.varphi}{Gammavdashvarphi}$ where $x$ doesn't occur in $varphi$, and similarly for $exists I$. A more accurate description of what's happening in the traditional case is that free variables behave like constants and traditional descriptions only consider cases with an infinite supply of constants (which force a non-empty domain).
                      – Derek Elkins
                      Nov 19 at 1:56








                    • 1




                      @CarlMummert Neither Eric Wofsey nor I said that it was only a historical artifact, but I, at least, do think the traditional presentation incorporates it in a subtle, non-obvious way and that it would be better if it was a more clearly separated assumption. I also think that if you set out to define a type of precisely closed well-formed formulas, you end up wanting well-formed formulas and terms with a given set of free variables. Once you make this design decision it strongly suggests rules and semantics that do allow an empty domain and the traditional approach becomes unnatural.
                      – Derek Elkins
                      Nov 19 at 1:59















                    up vote
                    0
                    down vote













                    If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that the rules for quantifiers don't require the domain to be non-empty. (So this is one answer for what [primitive] rules are valid in an empty domain.)



                    I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$frac{Gammavdashforall x.varphi(x)}{Gammavdashvarphi(t)}rlap{small{forall I,t}}$$ where $Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $bot$ from $forall x.bot$, i.e. proving $negforall x.bot$ which is equivalent to $exists x.top$, i.e. the domain being non-empty.



                    The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the the resulting formula. Here's where a more careful analysis can be helpful. Write $Gammavdash_V psi$ to mean $Gammavdashpsi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $Gammavdash_Vpsi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $vdash_V$ is then a (meta-)relation on sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $Gammavdash_varnothingpsi$.



                    To (indirectly) show that the domain is non-empty, we'd want to show $vdash_varnothing negforall x.bot$. Here's an attempted derivation: $$dfrac{dfrac{dfrac{}{forall x.botvdash_varnothingforall x.bot}rlap{small{Ax}}}{forall x.botvdash_varnothingbot}rlap{small{forall I,?}}}{vdash_varnothingnegforall x.bot}rlap{small{{to}I}}$$



                    This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.



                    What happens in many contexts, particularly for Hilbert-style systems, is the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.



                    We get the semantic analogue of the above as follows. If we the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify the values for (typically countably infinitly many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.



                    So, as Eric Wofsey said as a comment in your other question, it's a bit of a "historical artifact". Quite possibly from something that was somewhat accidental. The core invalid rule, at least from the perspective of the natural deduction/sequent calculus, is doing universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many derivable rules from this invalid rule that would also be invalid.






                    share|cite|improve this answer



















                    • 1




                      Are you suggesting a set of rules such that, for example, $forall x . Px to exists x.Px$ would not be derivable? If so, how would you prove $(exists y forall x.Pxy) to (forall xexists y.Pxy)$ in such a system? Would you like me to ask this as a separate question?
                      – DanielV
                      Nov 17 at 23:52








                    • 1




                      Yes, the former would not be derivable with $V=varnothing$ as in that case you'd semantically get true implies false (given an empty domain) which you don't want. For the second formula, you use universal introduction which looks like $dfrac{Gammavdash_{{x}cup V}varphi(x)}{Gammavdash_V forall x.varphi(x)}$, similarly for existential elimination. This will make $x$ and $y$ available as terms you can use in universal elimination and existential introduction. Nothing particularly tricky happens.
                      – Derek Elkins
                      Nov 18 at 0:41








                    • 2




                      @DanielV In my model theory course this semester, we started off by proving the completeness theorem in a semantics which allows empty structures. The proof system is along the lines of what Derek Elkins suggests in this answer. You can see it in Chapter 3 of my lectures notes (pp. 20-30), available here: pages.iu.edu/~akruckma/M682.html
                      – Alex Kruckman
                      Nov 18 at 19:52








                    • 1




                      @CarlMummert The inference rules are universal elimination and existential introduction when you have no terms. Those are the rules that traditional descriptions have that the description I gave lacks. We could make these explicit as $dfrac{Gammavdashforall x.varphi}{Gammavdashvarphi}$ where $x$ doesn't occur in $varphi$, and similarly for $exists I$. A more accurate description of what's happening in the traditional case is that free variables behave like constants and traditional descriptions only consider cases with an infinite supply of constants (which force a non-empty domain).
                      – Derek Elkins
                      Nov 19 at 1:56








                    • 1




                      @CarlMummert Neither Eric Wofsey nor I said that it was only a historical artifact, but I, at least, do think the traditional presentation incorporates it in a subtle, non-obvious way and that it would be better if it was a more clearly separated assumption. I also think that if you set out to define a type of precisely closed well-formed formulas, you end up wanting well-formed formulas and terms with a given set of free variables. Once you make this design decision it strongly suggests rules and semantics that do allow an empty domain and the traditional approach becomes unnatural.
                      – Derek Elkins
                      Nov 19 at 1:59













                    up vote
                    0
                    down vote










                    up vote
                    0
                    down vote









                    If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that the rules for quantifiers don't require the domain to be non-empty. (So this is one answer for what [primitive] rules are valid in an empty domain.)



                    I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$frac{Gammavdashforall x.varphi(x)}{Gammavdashvarphi(t)}rlap{small{forall I,t}}$$ where $Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $bot$ from $forall x.bot$, i.e. proving $negforall x.bot$ which is equivalent to $exists x.top$, i.e. the domain being non-empty.



                    The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the the resulting formula. Here's where a more careful analysis can be helpful. Write $Gammavdash_V psi$ to mean $Gammavdashpsi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $Gammavdash_Vpsi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $vdash_V$ is then a (meta-)relation on sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $Gammavdash_varnothingpsi$.



                    To (indirectly) show that the domain is non-empty, we'd want to show $vdash_varnothing negforall x.bot$. Here's an attempted derivation: $$dfrac{dfrac{dfrac{}{forall x.botvdash_varnothingforall x.bot}rlap{small{Ax}}}{forall x.botvdash_varnothingbot}rlap{small{forall I,?}}}{vdash_varnothingnegforall x.bot}rlap{small{{to}I}}$$



                    This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.



                    What happens in many contexts, particularly for Hilbert-style systems, is the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.



                    We get the semantic analogue of the above as follows. If we the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify the values for (typically countably infinitly many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.



                    So, as Eric Wofsey said as a comment in your other question, it's a bit of a "historical artifact". Quite possibly from something that was somewhat accidental. The core invalid rule, at least from the perspective of the natural deduction/sequent calculus, is doing universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many derivable rules from this invalid rule that would also be invalid.






                    share|cite|improve this answer














                    If we do a careful analysis of the rules of inference for quantifiers in natural deduction or the sequent calculus, we see that the rules for quantifiers don't require the domain to be non-empty. (So this is one answer for what [primitive] rules are valid in an empty domain.)



                    I'll focus on the universal elimination rule. (Dually, the existential introduction rule has the same issue.) It is often written as $$frac{Gammavdashforall x.varphi(x)}{Gammavdashvarphi(t)}rlap{small{forall I,t}}$$ where $Gamma$ is a finite set of formulas and $t$ is some term. Written this way, this rule seems to warrant deriving $bot$ from $forall x.bot$, i.e. proving $negforall x.bot$ which is equivalent to $exists x.top$, i.e. the domain being non-empty.



                    The "trick" is that to apply this rule of inference, we need to have a term, $t$, even if it doesn't occur in the the resulting formula. Here's where a more careful analysis can be helpful. Write $Gammavdash_V psi$ to mean $Gammavdashpsi$ except that all formulas are only allowed to contain free variables in the set $V$. (Really, $Gammavdash_Vpsi$ is the more primitive notion and is useful for defining the universal introduction rule.) More cleanly, we can define a set of terms with free variables in a given set $V$, and (thus) a set of formulas with free variables in $V$. $vdash_V$ is then a (meta-)relation on sets of formulas with free variables in $V$ and a formula with free variables in $V$. A (conditional) proof is then a derivation of a formula with no free variables, i.e. $Gammavdash_varnothingpsi$.



                    To (indirectly) show that the domain is non-empty, we'd want to show $vdash_varnothing negforall x.bot$. Here's an attempted derivation: $$dfrac{dfrac{dfrac{}{forall x.botvdash_varnothingforall x.bot}rlap{small{Ax}}}{forall x.botvdash_varnothingbot}rlap{small{forall I,?}}}{vdash_varnothingnegforall x.bot}rlap{small{{to}I}}$$



                    This derivation fails because, assuming there are no constants in our signature, we have no terms (with no free variables) at all to use to apply universal elimination.



                    What happens in many contexts, particularly for Hilbert-style systems, is the set of terms always contains all possible variables. In that context, it is always possible to find a term: just pick some free variable. While indexing terms and formulas by sets of free variables is slightly more complicated than simply having a single set of terms/formulas, I find it to be significantly cleaner and clearer.



                    We get the semantic analogue of the above as follows. If we the set of terms always includes all free variables, then to interpret terms we must say what is done with free variables. This means our interpretation function must specify the values for (typically countably infinitly many) free variables. This forces our semantic domain to be non-empty and is rather awkward. For example, we have uncountably many effectively equivalent interpretations even when we pick a finite semantic domain. If we index our interpretation functions by sets of free variables, then we only need to give values for the finite set of free variables we actually use. In particular, when that set is empty, we only need to give values for closed terms. Since there may be no closed terms, this allows the domain to be empty.



                    So, as Eric Wofsey said as a comment in your other question, it's a bit of a "historical artifact". Quite possibly from something that was somewhat accidental. The core invalid rule, at least from the perspective of the natural deduction/sequent calculus, is doing universal elimination/existential introduction in a context where you have no terms. There are, of course, infinitely many derivable rules from this invalid rule that would also be invalid.







                    share|cite|improve this answer














                    share|cite|improve this answer



                    share|cite|improve this answer








                    edited Nov 19 at 1:20

























                    answered Nov 17 at 21:52









                    Derek Elkins

                    15.9k11336




                    15.9k11336








                    • 1




                      Are you suggesting a set of rules such that, for example, $forall x . Px to exists x.Px$ would not be derivable? If so, how would you prove $(exists y forall x.Pxy) to (forall xexists y.Pxy)$ in such a system? Would you like me to ask this as a separate question?
                      – DanielV
                      Nov 17 at 23:52








                    • 1




                      Yes, the former would not be derivable with $V=varnothing$ as in that case you'd semantically get true implies false (given an empty domain) which you don't want. For the second formula, you use universal introduction which looks like $dfrac{Gammavdash_{{x}cup V}varphi(x)}{Gammavdash_V forall x.varphi(x)}$, similarly for existential elimination. This will make $x$ and $y$ available as terms you can use in universal elimination and existential introduction. Nothing particularly tricky happens.
                      – Derek Elkins
                      Nov 18 at 0:41








                    • 2




                      @DanielV In my model theory course this semester, we started off by proving the completeness theorem in a semantics which allows empty structures. The proof system is along the lines of what Derek Elkins suggests in this answer. You can see it in Chapter 3 of my lectures notes (pp. 20-30), available here: pages.iu.edu/~akruckma/M682.html
                      – Alex Kruckman
                      Nov 18 at 19:52








                    • 1




                      @CarlMummert The inference rules are universal elimination and existential introduction when you have no terms. Those are the rules that traditional descriptions have that the description I gave lacks. We could make these explicit as $dfrac{Gammavdashforall x.varphi}{Gammavdashvarphi}$ where $x$ doesn't occur in $varphi$, and similarly for $exists I$. A more accurate description of what's happening in the traditional case is that free variables behave like constants and traditional descriptions only consider cases with an infinite supply of constants (which force a non-empty domain).
                      – Derek Elkins
                      Nov 19 at 1:56








• @CarlMummert Neither Eric Wofsey nor I said that it was only a historical artifact, but I, at least, do think the traditional presentation incorporates it in a subtle, non-obvious way, and that it would be better if it were a more clearly separated assumption. I also think that if you set out to define a type of precisely closed well-formed formulas, you end up wanting well-formed formulas and terms with a given set of free variables. Once you make this design decision, it strongly suggests rules and semantics that do allow an empty domain, and the traditional approach becomes unnatural.
  – Derek Elkins, Nov 19 at 1:59













