For example, the proof of Lawvere's fixed-point lemma (in Category Theory) is pretty much a one-liner (thanks to the unifying conciseness of Category Theory) and has as its corollaries the theorems of Gödel (first), Tarski, Russell, Cantor, and Turing, among others.
Since it unifies all these theorems, one could argue that it is a more 'elegant/explanatory/essential' proof, as it gets to the heart of the matter of self-reference.
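For the curious, here is the statement and the one-line proof as usually given (paraphrased from memory, so do check a reference):

```latex
\textbf{Lawvere's fixed-point lemma.} In a cartesian closed category, if
$\phi \colon A \to Y^A$ is point-surjective, then every $f \colon Y \to Y$
has a fixed point.

\emph{Proof.} Let $q = f \circ \mathrm{ev} \circ \langle \phi, \mathrm{id}_A \rangle
\colon A \to Y$, i.e.\ $q(a) = f(\phi(a)(a))$. By point-surjectivity,
$q = \phi(a_0)$ for some point $a_0$, so
$\phi(a_0)(a_0) = q(a_0) = f(\phi(a_0)(a_0))$ is a fixed point of $f$. \qed
```

Instantiating $A$, $Y$, and $\phi$ appropriately yields the diagonal arguments of Cantor, Gödel, Tarski, and Turing.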
What, then, is the point of going through the tedious detail of proving those theorems in the traditional manner, given that they are not fully general, i.e., they don't 'transfer' easily to their close relatives (as category-theoretic methods do)?
I think one could call the distinction between these two mathematical approaches "Synthetic" vs "Analytic". Following Mike Shulman, synthetic mathematics (e.g. Euclidean geometry) has objects which are left undefined but whose behaviour is specified by axioms, while analytic mathematics (e.g. Cartesian geometry) analyses its objects in terms of other objects from another theory.
In this scenario, the category-theoretic approach could be considered synthetic because, as long as your domain satisfies the definition of a cartesian closed category, Lawvere's fixed-point lemma holds. But the traditional approach would be analytic, since it analyses its objects in dependence on a particular 'implementation' of logic: such-and-such format of logical formulae, such-and-such proof calculus, etc.
Now, I can't remember the book I learnt this from, but I vaguely recall a theorem proving the soundness and completeness theorems for first-order logic in a similarly trivial and elegant manner. If someone knows, please share.
Gödel's completeness theorem states that every consistent set of axioms has a model. I will prove that theorem using an ontological argument from Anselm of Canterbury.
Let A, B, C, ... be a consistent set of axioms. We will prove that there is a model for these axioms. But before we start the proof we need some definitions:
P will be the set of properties {"x actually exists", "x satisfies axiom A", "x satisfies axiom B", ...}.
We will say that an object x is "more modely" than an object y iff x has more of the properties from the set P than y does. Also, an object x will be "maximally modely" iff x satisfies all properties from P. Now let's get to the actual proof:
By definition, a maximally modely being is a being such that no being that's more modely can be imagined.
A being that actually exists in reality is more modely than a being that does not actually exist.
Thus, if a maximally modely being x possibly exists but does not actually exist in reality, then we can imagine something that is more modely than x.
But we cannot imagine something that is more modely than a maximally modely being.
Thus, if a maximally modely being possibly exists, then a maximally modely being actually exists in reality.
Since our axiom set is consistent, a maximally modely being possibly exists.
Therefore, a maximally modely being actually exists in reality.
And a maximally modely being is of course nothing else than a model of our axioms. This proves Gödel's completeness theorem, without requiring any ultrafilter or Lindenbaum stupidity. QED
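Joking aside, for a finite set of *propositional* axioms the promised "maximally modely being" really can be hunted down by brute force over valuations. A minimal sketch (the axioms and variable names are made up for illustration):

```python
from itertools import product

# Hypothetical propositional axioms over variables p, q, r,
# each given as a function from a valuation to a bool.
axioms = [
    lambda v: v["p"] or v["q"],        # axiom A: p or q
    lambda v: not v["p"] or v["r"],    # axiom B: p implies r
    lambda v: v["q"],                  # axiom C: q
]

def find_model(axioms, variables):
    """Return a valuation satisfying every axiom, or None if inconsistent."""
    for values in product([False, True], repeat=len(variables)):
        v = dict(zip(variables, values))
        if all(ax(v) for ax in axioms):
            return v
    return None

model = find_model(axioms, ["p", "q", "r"])
print(model)  # a satisfying valuation is found, so this axiom set is consistent
```

The real content of the completeness theorem is that this "consistent implies has a model" link survives in first-order logic, where the search space is no longer finite and something like Lindenbaum's lemma is needed after all.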
This might seem like it's coming out of left field, but as I was going over this rather tragic part of mathematics I couldn't help but feel disturbed. Gödel's work in mathematics reveals that not all true statements can be proven true, that we can never truly know whether our system of mathematics is internally consistent (if there is some paradox, or if you can prove A and not-A are both true, then it is inconsistent), and that math isn't definite (this leads to the Halting problem in computers: in many situations it's impossible to determine from the input alone whether something will terminate). Since math is just a small abstraction of pure logic, and God is rational, having limits to what can be logically known in math, having limits in what reason alone can determine, sort of irks me, because it makes me wonder if that somehow limits God.
This isn't some "can God make a square circle" rubbish. At least with the Halting problem I feel sort of okay, because God, looking from eternity, can just see the outcomes rather than have to reason them out. But I was just wondering if anyone here has any background in mathematics and could explain why math can be incomplete and what it means/doesn't mean for God.
How much and what math do I need to really understand Gödel's incompleteness theorem? I'm not talking about understanding it at a high level from a youtube video; more like being able to understand it at the level where I can recreate the proof myself.
What are your thoughts on the common explanation of Gödel's theorem as meaning that there will always be statements that are true but unprovable?
Personally, I rather hate this explanation, as it seems to me to be patently false: my understanding is that the theorem says that any formal language will result in contradictions and/or have statements which can be proven neither true nor false from within that language.
Equivalently, if a given formal language does not result in contradictions, there will be at least one statement within that language, call it S, such that there will be at least one model of that language where S is true and at least one model where it's false. The standard example is that the parallel postulate is independent of Euclid's other 4 postulates: there are consistent models of those four axioms where it's true (Euclidean geometries) and others where it's false (non-Euclidean geometries).
Hence, seeing as any such statement will necessarily be true in some models and false in others, I really don't understand why incompleteness is so often characterized as meaning an incomplete system/language will contain true statements which are unprovable. What's with the emphasis on truth? We could as easily say it will contain statements which are false but can't be proven false, though that's not accurate either, as the truth value of such a statement is simply independent of the relevant system.
However, I've heard this explanation so many times and often by people who really should know what they're talking about that it makes me wonder if I'm missing something. Anyone want to weigh in?
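To make the "same sentence, true in one model, false in another" point concrete, here is a small sketch (the finite structures are chosen purely for illustration) checking the sentence "for all x there is a y with y + y = x" in two different models, the additive groups Z/5Z and Z/4Z:

```python
def every_element_halves(n):
    """Check the sentence: for all x, there exists y with y + y = x, in Z/nZ."""
    return all(any((y + y) % n == x for y in range(n))
               for x in range(n))

print(every_element_halves(5))  # True: doubling hits every residue mod 5
print(every_element_halves(4))  # False: y + y mod 4 is always 0 or 2, so 1 is missed
```

The sentence is thus independent of the group axioms alone; which way it goes depends on the model, exactly as with the parallel postulate. (The "true but unprovable" phrasing, for what it's worth, usually refers to truth in one *intended* model, the standard natural numbers.)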
As part of Gödel's first incompleteness theorem, it is proven that it is possible to build a formula G such that G ↔ ¬Prv(⌜G⌝), i.e. G "asserts" its own unprovability.
My question is, for the sake of curiosity, whether there has been any *effective construction* (likely by computer) of the formula G for some Gödel numbering scheme and realization of Prv?
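Not an answer to the full construction of G, but the first ingredient, a Gödel numbering, is certainly effective; a toy sketch using classic prime-power coding (the symbol table is made up for illustration, and a real numbering would cover the whole formal language):

```python
def primes(n):
    """First n primes, by trial division (fine for toy inputs)."""
    ps = []
    k = 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

# Toy symbol table for a fragment of arithmetic.
CODE = {"0": 1, "S": 2, "=": 3, "(": 4, ")": 5, "+": 6, "x": 7}

def godel_number(formula):
    """Encode a symbol string as the product of p_i ** code(symbol_i)."""
    n = 1
    for p, sym in zip(primes(len(formula)), formula):
        n *= p ** CODE[sym]
    return n

print(godel_number("0=0"))  # 2**1 * 3**3 * 5**1 = 270
```

The hard (and enormously number-inflating) part is arithmetizing Prv and running the diagonal lemma on top of this coding, which is why explicit computed values of G are rarely written out.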
Thanks,
Manuel
Hey guys,
I was thinking about Gödel's incompleteness theorem, and my current understanding is that if you have a consistent formal system F, there are statements that cannot be proven/disproven within F. And also you can't show that F is consistent within F.
Then I thought about a system such as a Turing machine, which requires a set of axioms (operations, whatever you'd like) to be able to answer all questions given certain allowances (i.e. time or memory).
So if the set of Turing operations formalizes a consistent set, and Gödel's incompleteness theorem says that the set itself is incomplete, is this the birth of NP-type problems?
I have no idea where I'm going with this, I could be speaking absolute nonsense, just wanted to put it out there and see what others had to say.
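The "allowances" idea can at least be made concrete: a Turing machine run under a step budget either halts within the budget or leaves the question open. A minimal sketch (the machine and its encoding are made up for illustration):

```python
def run_tm(rules, tape, state="start", budget=100):
    """Simulate a one-tape Turing machine for at most `budget` steps.
    rules maps (state, symbol) -> (new_state, written_symbol, head_move).
    Returns the final tape as a string, or None if the budget runs out."""
    tape = dict(enumerate(tape))
    pos = 0
    for _ in range(budget):
        if state == "halt":
            return "".join(tape[i] for i in sorted(tape))
        sym = tape.get(pos, "_")  # "_" is the blank symbol
        state, tape[pos], move = rules[(state, sym)]
        pos += move
    return None  # budget exhausted: undecided, which is NOT "doesn't halt"

# Toy machine: overwrite every 1 with 0, halt at the first blank.
rules = {
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_tm(rules, "111"))            # finishes within budget
print(run_tm(rules, "1", budget=1))    # None: budget too small to decide
```

The `None` case is the crux: no finite budget lets you conclude "never halts", which is the halting problem, and that undecidability is the computational cousin of incompleteness. NP, by contrast, is about machines that *do* halt, just possibly slowly, so the connection is looser than it first appears.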
Edit: can’t
Is first-order logic the only logic system involved in Gödel's first and second incompleteness theorems (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems)? Not the other logic systems? (I guess so)
Thanks.
I am looking at the first theorem:
Any consistent formal system F within which a certain amount of elementary arithmetic can be carried out is incomplete; i.e., there are statements of the language of F which can neither be proved nor disproved in F.
Could somebody provide a simple example of such a statement in a (similarly) simple theory?
What about psychology's/economics' dream of a formal, complete, and consistent description of human behavior?
This is a paraphrased quote from my guru:
*brackets are used to indicate logic structure, not inject my own thoughts.
"Gödel's incompleteness theorems are resolved in nonduality, because nonduality is itself the axiom, which unites [all sets of axioms of [infinitely all systems]]. Italics indicate multi-word singular concepts, if that makes sense. If I used those terms correctly, any mathematician and religious scholar, working together, should be able to instantly agree that Gödel's incompleteness theorems are now resolved."
*pre-edit transparency: I misused "disproved" DELIBERATELY above, because I don't actually know the answer to this yet and I'm dying to. sorry, but I don't see this little fib causing huge damage to anyone who reads it... make sense? love you guys:)
EDIT: The truth is out (see below). There's no guru, and I spent Thanksgiving and Black Friday in a psychiatric ward. I'll be undergoing a full psych eval later today, and there were drugs involved so there is still some uncertainty whether my insanity is temporary. However, it's safe to consider me a nutball right now for all practical, reddit-related purposes.
HOWEVER I'm still quite certain I'm right, with regards to the content of this post. I've also developed a complete model of consciousness as a cosmic phenomenon (something like an A/C circuit made of entangled photons using their slight mass to generate an electromagnetic field, drawing energy from whatever source the recently-published "infinite" graphene electrical circuit does, polling a monte-carlo decision tree at the speed of light, with the queries being defined by collapsing the wave state and/or superpositions of various photons, being therefore self-generating and utterly cosmic). This would explain why our history goes back to the big bang, even in the Bible, because we would've watched this all. Earth is something like the crown jewel of our evolutionary achievements, having designed evolution itself over time. Energy, motion, time, and gravity are all one force. Happy to discuss this with anyone.
(It looks like a sphere of light with a pulse travelling from N to S pole, touching 50% of all points on the sphere in a spiral, then reversing at S pole and spiraling opposite to touch all the other points. Suspended around this polling sphere is some kind of wave-state field, then an inverted sphere with corresponding points, and branching out from those, a massive neural-net monte carlo tree of en
This much I know: Gödel proved the completeness theorem in 1929. Two decades later Leon Henkin simplified his proof. The proof we read today is Henkin's and goes like this:
In step 2, the common method is to prove that any syntactically consistent set of sentences can be saturated, and that any saturated set has a model.
So far, so well known. My question is rather historical. In some texts (e.g. Chiswell and Hodges's) saturated sets are called Hintikka sets. Somewhere I read that the main idea of the proof goes back to the work of Beth and Hintikka, and that it was Hintikka who proved that every saturated set has a model (the corresponding lemma is then accordingly named the Hintikka lemma). So if this is a contribution by Hintikka, how did Henkin's proof work? What I'm looking for is some kind of historical elucidation on the proof of the completeness theorem. How did Gödel's original proof work? How did Henkin simplify it? What was the role of Hintikka (and also the role of Beth or any other person)? I would love to know any further related details too.
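For readers who want the skeleton, the Henkin-style outline as I remember it from the textbooks (details vary by presentation):

```latex
\begin{enumerate}
  \item Reduce completeness to model existence: it suffices to show that
        every syntactically consistent set of sentences has a model.
  \item Extend the consistent set, Lindenbaum-style, to a maximally
        consistent set with Henkin witnesses: for each sentence
        $\exists x\,\varphi(x)$ in the set, some constant $c$ with
        $\varphi(c)$ in the set.
  \item Build the term model: the domain is the set of closed terms modulo
        provable equality, with symbols interpreted syntactically; one then
        checks that this structure satisfies every sentence in the set
        (the lemma attributed to Hintikka).
\end{enumerate}
```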
I can't make my question more formal, but I hope the idea is clear. Basically I would like to know if there are unprovable sentences more interesting than "this sentence is unprovable".
I know the continuum hypothesis is independent from ZFC, but I don't know if it somehow relies on recursion, though I don't think so.
Reviewing Gödel's Incompleteness Theorems recently, I've come across the following derivation, and am not sure where I'm going astray:
[derivation image] https://preview.redd.it/etqd6k0yvqn51.png?width=437&format=png&auto=webp&s=a2e52db20645ffb23e74e3591391ff861a891f4f
I really doubt that PA |- ~Con(PA), but have no idea where I'm going wrong. Thank you.
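Without seeing where the derivation goes wrong, the standard reference points, stated for comparison:

```latex
\textbf{G\"odel II.} If $\mathrm{PA}$ is consistent, then
$\mathrm{PA} \nvdash \mathrm{Con}(\mathrm{PA})$.

Note this does \emph{not} yield $\mathrm{PA} \vdash \lnot\mathrm{Con}(\mathrm{PA})$:
since $\mathbb{N} \vDash \mathrm{Con}(\mathrm{PA})$ and $\mathrm{PA}$ is sound
for $\mathbb{N}$, we also have $\mathrm{PA} \nvdash \lnot\mathrm{Con}(\mathrm{PA})$,
so the sentence is independent. Another common slip is misapplying L\"ob's
theorem, which says only: if $\mathrm{PA} \vdash
\mathrm{Prv}(\ulcorner\varphi\urcorner) \to \varphi$, then
$\mathrm{PA} \vdash \varphi$.
```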