Does anyone know of a GroupMe for this class?
There is a basic exercise: prove (¬)P, (¬)Q ⊢ (¬)(P operator Q) (where "operator" is ∨, ∧, or →, and some of the negations may be omitted) using the axioms of propositional calculus and MP. I can do it for every operator-and-negation combination except the one in the title. Can you give me any hints?
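Not knowing which case the title names, here is at least the shape such derivations take under a standard Łukasiewicz-style axiomatization (A1: φ → (ψ → φ); A2: (φ → (ψ → χ)) → ((φ → ψ) → (φ → χ)); A3: (¬ψ → ¬φ) → (φ → ψ)), shown on the easy case P ⊢ Q → P:
1. P (premise)
2. P → (Q → P) (instance of A1)
3. Q → P (MP on 1, 2)
The negated cases usually route through A3 together with the deduction theorem.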
I'm having a really hard time wrapping my head around predicate logic, and an especially hard time with uniqueness.
I need to translate the two statements below into predicate logic using these two predicates:
R(x): person x is in this room
O(x,y): person x owns property in state y
(The two statements are independent of each other)
My attempt for statement 1:
∃x(R(x) ∧ ¬O(x, Georgia))
My attempt for statement 2:
∃x(R(x) ∧ ∀z∀y(¬O(z,y) → z = x))
I feel like I'm missing something in part 2.
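For comparison, the standard template for "there is exactly one x such that φ(x)" is

∃x(φ(x) ∧ ∀z(φ(z) → z = x))

so if (and this is only a guess at the missing statement) part 2 means "exactly one person in this room owns no property in any state," the whole condition R(z) ∧ ∀y¬O(z,y) would have to appear in place of φ(z) inside the uniqueness clause, and the witness x would also need to be asserted to satisfy it.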
I encountered this term in Many-Valued Logics by Grzegorz Malinowski, in the section "The classical logic." It wasn't explained there.
Hello all. I've found myself studying logic thanks to philosophy. I studied philosophy in college and fell in love with the idea of analyzing language in order to solve philosophical problems. I was reintroduced to Frege and then Russell after graduating and I started studying logic on my own using various textbooks. I didn't do the best with symbolic logic in college (I made a C in the logic class I had to take) due to not being mathematically inclined. Years later however, I've found that studying it because I want to has proven to be far more interesting.
I've been using Harry Gensler's book as well as revisiting my copy of Patrick Hurley's introduction. I pretty much have a decent grasp on propositional logic and predicate logic. I can do proofs without too much struggle now as many of the inference rules have become internalized. I've even dipped my toes into exploring modal logic....but only the basic, basic stuff as things get confusing fast for me.
My question is, how do I USE logic? I know how to solve problems given to me in a textbook, but surely there's more to it than that? I'm new at this and I can't claim to have mastered anything, but exactly what can a person like me, a philosophically inclined person with no interest in programming or computer science, do with logic on a regular basis? Even if I continue on with logic for its own sake, how can I move past just solving proofs given to me in textbooks?
Hi, while reading up on propositional calculus and also predicate logic, I wonder when to use lowercase letters like p, q, r and when to use uppercase letters like A, B, Q, S. When, for example, modus ponens is explained, the lowercase letters are often used; in truth tables, the uppercase letters; sometimes they're even mixed. Am I free to choose, or are there special conventions or semantic differences? Many thanks in advance.
Hey everybody, I've been reading about the history of logic lately, and I've been reading a lot about the developments in the 19th century. Frege is obviously a major contributor, by inventing modern predicate logic and being the grandfather of analytic philosophy, but I was reading about his contribution to propositional logic as well. Namely, his logical system consisting of six axioms and only one rule of inference, which he proved was equivalent to the standard propositional calculus of the time. Those axioms are listed here: https://en.wikipedia.org/wiki/Frege%27s_propositional_calculus.
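For reference, the six axioms on that page, as I read the article, are:
1. A → (B → A)
2. (A → (B → C)) → ((A → B) → (A → C))
3. (A → (B → C)) → (B → (A → C))
4. (A → B) → (¬B → ¬A)
5. ¬¬A → A
6. A → ¬¬A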
The only problem is that, in my research, I can't find 1) what the standard axioms are (I'm assuming they're things like material implication, De Morgan's laws, etc.) and 2) which of Frege's axioms/theorems correspond to them. I mean, which axiom is A->(B->A) equivalent to?
I also know that Hilbert further streamlined this system down to four axioms, but even including that in my research I couldn't find any comparisons between it and standard propositional calculus. Anyone know?
Hello everyone,
I've written a small tool that derives any given tautology of propositional calculus from a set of axioms and modus ponens, and I thought some of you might take an interest in it. You can find it here:
http://formalproofmachine.appspot.com/
If you encounter any bugs or spelling errors, I'd be pleased if you contacted me or replied in this thread.
Thanks in advance!
Hello!
There's an exercise which asks to prove the interpolation theorem, but such that if φ ⊨ ψ, there exists a γ such that φ ⊨ γ ⊨ ψ, but not ψ ⊨ γ (1) and not γ ⊨ φ (2). (The additions to the interpolation theorem are (1) and (2).) Since the proof of the theorem is constructive, one could use the formula usually denoted φ*. I can't do either (1) or (2). Some help? Thanks in advance.
Hi! So I recently started to study coding and I came across Boolean functions, which have a true or false value. As I understand it, within logic, questions are boiled down to either true or false in order to solve them or make problems easier. I am interested in whether there are any books about how "real-world problems" are turned into logical true or false values, connected to computer science and coding but from a more philosophical point of view. I know there are many different topics, but I just want to make computer science and coding a bit more philosophical.
For example, can there be a calculus without a negation sign? How many connectives are needed for a propositional calculus?
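On the second question: a single connective can suffice. The Sheffer stroke (NAND) is functionally complete, so a propositional calculus can dispense with a primitive negation sign entirely. A minimal sketch checking this (the function names are mine):

# The Sheffer stroke (NAND) alone recovers not, and, or,
# which shows one connective is enough for propositional logic.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):        # ~a      ==  a NAND a
    return nand(a, a)

def and_(a, b):     # a & b   ==  ~(a NAND b)
    return nand(nand(a, b), nand(a, b))

def or_(a, b):      # a | b   ==  ~a NAND ~b
    return nand(nand(a, a), nand(b, b))

# Verify against Python's built-in operators on every truth assignment.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
print("NAND alone recovers not/and/or")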
Let's say you have this statement:
((A ∨ ¬A) ∨ (B ∨ ¬B)) => ...................
Is there a way I could simplify it to just say: Tautology => ..........
What I'm asking is: can you put some sign that signifies "tautology" into a propositional calculus statement?
Thanks
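For what it's worth, there is a standard sign for this: the constant ⊤ ("top," or verum) denotes an arbitrary tautology, with ⊥ ("bottom," falsum) as its dual for contradictions. So the statement can be abbreviated to

⊤ => ...................

in any presentation of propositional calculus that includes these constants.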
Hi all, I'm reading through GEB for the first time, and today I came across the propositional calculus and something baffled me. On the second-to-last page he proves that <P∧~P> implies that any arbitrary statement Q can follow. So far so good, but what I do not understand is the way he uses the fantasy rule to prove this statement. If I slightly rewrite what he did, I can break the main issue down to the following:
P premise
[ push
Q premise
P carry-over
] pop
<Q⊃P> fantasy
This looks like any random Q could imply P, and yet it seems to me that this must be wrong (at least my intuition tells me so), since I can think of many arbitrary atoms Q that do not necessarily imply a given atom P. I have strictly used the proposed rules to derive the above.
Can you help me out?
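One observation that may dissolve the puzzle (my note, not Hofstadter's wording): <Q⊃P> is unproblematic whenever P is itself already a theorem, because a conditional is false only when its antecedent is true and its consequent false:

Q | P | Q⊃P
T | T |  T
T | F |  F
F | T |  T
F | F |  T

Since P was established outside the fantasy, the one falsifying row can never arise; the derivation does not say that Q implies an arbitrary P, only a P that has already been proven.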
Whenever I've read up on math I've always run into discussions of this type of calculus or that type, with no real explanation of what they really mean by the term calculus. For example, propositional calculus, predicate calculus, lambda calculus, etc.
Note I'm not asking what each of the above is, but how one goes about calling something "a calculus," and how I can learn more about that.
I found the below statement in the propositional calculus page on wikipedia:
> In general terms, a calculus is a formal system that consists of a set of syntactic expressions (well-formed formulas), a distinguished subset of these expressions (axioms), plus a set of formal rules that define a specific binary relation, intended to be interpreted as logical equivalence, on the space of expressions.
But it is very brief and does not link to anywhere that more fully describes this concept as a whole.
Where can I learn more about this use of the term "calculus" and how these types of systems are constructed?
For background, I'm self-studying logic/proof techniques and I'm interested in the foundations upon which these systems are based, how they are constructed and build on each other, etc. Specifically, finding that predicate logic extends propositional logic is what led me to read up on this connection more, which then led to this question. I'm also reading volume 2 of Kline's history of math, so it may be discussed in there, but I literally just started reading it.
Thanks.
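One way to make the quoted definition concrete is to build a toy calculus in code: a set of strings, one axiom, and purely syntactic derivation rules. The sketch below uses Hofstadter's MIU system (from the GEB discussion further down) rather than propositional calculus, but the architecture is exactly the one the Wikipedia passage describes; all names are mine:

from collections import deque

def step(s):
    """Yield every string derivable from theorem s by one rule application."""
    if s.endswith("I"):
        yield s + "U"                      # Rule 1: xI  -> xIU
    if s.startswith("M"):
        yield "M" + s[1:] * 2              # Rule 2: Mx  -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]  # Rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]        # Rule 4: UU  -> (deleted)

def theorems(axiom="MI", max_len=8, limit=20):
    """Breadth-first enumeration of the system's theorems from the axiom."""
    seen, queue, found = {axiom}, deque([axiom]), [axiom]
    while queue and len(found) < limit:
        for t in step(queue.popleft()):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
                found.append(t)
    return found

print(theorems())    # ['MI', 'MIU', 'MII', 'MIUIU', 'MIIU', 'MIIII', ...]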
GΓΆdel, Escher, Bach: An Eternal Golden Braid
This is a discussion of the themes and questions concerning Chapter 7, "The Propositional Calculus," and its dialogue, "Crab Canon."
Logical Rules
RULE OF JOINING: If x and y are theorems of the system, then so is the string <x∧y>.
FORMATION RULES: If x and y are well-formed, then the following four strings are also well-formed:
(1) ~x = not x
(2) <x⊃y> = x implies y
(3) <x∧y> = x and y
(4) <x∨y> = x or y
RULE OF SEPARATION: If <x∧y> is a theorem, then both x and y are theorems.
DOUBLE-TILDE RULE: The string '~~' can be deleted from any theorem. It can also be inserted into any theorem, provided that the resulting string is itself well-formed.
FANTASY RULE: If y can be derived when x is assumed as a premise, then <x⊃y> is a theorem. (Note that y has to be a well-formed string arising from some derivation that starts from x.)
CARRY-OVER RULE: Inside a fantasy, any theorem from the reality one level higher can be brought in and used.
RULE OF DETACHMENT: If x and <x⊃y> are both theorems, then y is a theorem.
CONTRAPOSITIVE RULE: <x⊃y> and <~y⊃~x> are interchangeable.
DE MORGAN'S RULE: <~x∧~y> and ~<x∨y> are interchangeable.
SWITCHEROO RULE: <x∨y> and <~x⊃y> are interchangeable.
All the above rules are used in most logical systems, albeit with different names.
How do we know whether the system is consistent? Hofstadter says that any such proof would require a system stronger than the Propositional Calculus and cannot be carried out from within the system. Do you agree or not? Does it make sense to ask if the Propositional Calculus is complete, considering the fact that it doesn't have any axioms, only the FANTASY RULE?
Hofstadter briefly talks about formalizing a system of meta-theorems, just as the Propositional Calculus is a formalization of theorems. Yet there is the obvious problem of always needing another level to talk about the top-most level. How can this be resolved? Hofstadter's comment that "a theory of reasoning could be identical to its own meta-theory" is an interesting idea, because if the system can make statements about itself, then the meta-theory is part of the system. For example, if we extend the Propositional Calculus to allow x and y themselves to appear as part of a well-formed string, then the Propositional Calculus can talk about itself: take <x⊃y> and let x = <x⊃y> and y = <x⊃y>; the resulting string is <<x⊃y>⊃<x⊃y>>.
http://www.decision-procedures.org/handouts/Tseitin70.pdf
Maybe I just don't have +Fravia skills but it took me forever to find this seminal paper that is cited in many of the program analysis papers I've come across. I went so far as to e-mail Daniel Kroening only to have someone in the ##re channel find a link that re-directed to Kroening's website for his book (For a rather brief one-line review of his book by rolfr, click http://www.reddit.com/r/ReverseEngineering/comments/qk096/video_semiautomated_input_crafting_by_symbolic/c3ygvnp).
Have fun with this paper and hope it gets mirrored a bit more so people can actually find it in the future.
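For anyone who tracks the paper down: the construction it is best known for, the Tseitin transformation, is easy to sketch. The following is a minimal illustration, not the paper's own presentation; the tiny AST encoding and all names are mine. It converts an arbitrary formula into an equisatisfiable CNF by giving each compound subformula a fresh variable:

from itertools import count

counter = count(1)          # fresh variable ids; literals are signed ints
var_ids = {}                # proposition name -> variable id

def tseitin(node, clauses):
    """Return a literal for `node`, appending its defining clauses.
    node is ("var", name) | ("not", f) | ("and", f, g) | ("or", f, g)."""
    kind = node[0]
    if kind == "var":
        if node[1] not in var_ids:
            var_ids[node[1]] = next(counter)
        return var_ids[node[1]]
    if kind == "not":                       # negation needs no fresh variable
        return -tseitin(node[1], clauses)
    a = tseitin(node[1], clauses)
    b = tseitin(node[2], clauses)
    x = next(counter)                       # fresh variable for this subformula
    if kind == "and":                       # clauses for x <-> (a AND b)
        clauses += [(-x, a), (-x, b), (x, -a, -b)]
    else:                                   # clauses for x <-> (a OR b)
        clauses += [(x, -a), (x, -b), (-x, a, b)]
    return x

def to_cnf(formula):
    clauses = []
    clauses.append((tseitin(formula, clauses),))   # assert the root literal
    return clauses

# (p AND NOT q) OR q
print(to_cnf(("or", ("and", ("var", "p"), ("not", ("var", "q"))), ("var", "q"))))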
Hey, thanks for taking the time to read this. I am currently studying maths at university and was wondering if anyone could help me with the following theorems. I know and kind of understand them, and I know how to use them; I just do not fully understand where they came from or how they are derived.
(a) (P ∧ ∼Q) ⇒ R if and only if P ⇒ (Q ∨ R).
(b) (P ∧ Q) ⇒ R if and only if P ⇒ (Q ⇒ R).
(c) (P ⇒ Q) ∧ (P ⇒ R) if and only if P ⇒ (Q ∧ R).
(d) (Q ⇒ P) ∧ (R ⇒ P) if and only if (Q ∨ R) ⇒ P.
thanks.
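One way to see where these "come from" is that each is a propositional tautology, so both sides agree on all eight truth assignments. A quick brute-force check (all names mine):

from itertools import product

def imp(a, b):
    """Material implication."""
    return (not a) or b

checks = {
    "(a)": lambda p, q, r: imp(p and not q, r) == imp(p, q or r),
    "(b)": lambda p, q, r: imp(p and q, r) == imp(p, imp(q, r)),
    "(c)": lambda p, q, r: (imp(p, q) and imp(p, r)) == imp(p, q and r),
    "(d)": lambda p, q, r: (imp(q, p) and imp(r, p)) == imp(q or r, p),
}

for name, f in checks.items():
    assert all(f(*v) for v in product([False, True], repeat=3))
    print(name, "holds for all 8 assignments")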
Hi guys,
I do not want to be too specific, as I want to crack this nut myself, but I was wondering whether you have any hints on how to encode a set S of elements which can depend on each other (given by a function S -> P(S)) and be in conflict with each other (given by a symmetric relation which is a subset of SxS), and then, for a given e from S, decide whether there is a subset U of S such that e is in U, any x in U depends only on elements of U, and no x, y in U are in conflict with each other.
Is there any literature (or, better yet, an online resource) for novices such as myself that you could recommend to help me attack this problem?
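In case it helps once you've had a go yourself: under my reading of the problem, any valid U must contain the dependency closure of {e}, and enlarging U can only introduce more conflicts, so it suffices to check the closure itself. A sketch, with hypothetical names:

from collections import deque

def has_valid_subset(e, dep, conflict):
    """dep: dict mapping each element to the set it depends on.
    conflict: set of frozensets {x, y}. The closure of {e} under dep is
    the smallest candidate U; it is valid iff it is conflict-free."""
    closure, queue = {e}, deque([e])
    while queue:
        x = queue.popleft()
        for y in dep.get(x, ()):
            if y not in closure:
                closure.add(y)
                queue.append(y)
    return all(frozenset((x, y)) not in conflict
               for x in closure for y in closure if x != y)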
I'm just wondering where the brackets would go with something like this. I'm not too sure what subset's precedence is.
I think that it would bind less tightly than intersect, and that the brackets would go like this (to make the order of evaluation clear):
(A intersect B) subset C
rather than
A intersect (B subset C)
My reasoning for this is that something like B subset C kind of evaluates the same number of 'truths' (three) as an OR. But this is perhaps the crappiest reasoning ever, hence the post.
Thanks!
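For what it's worth, one type-based way to settle it: ∩ takes two sets to a set, while ⊆ takes two sets to a truth value. So A ∩ B ⊆ C can only be parsed as (A ∩ B) ⊆ C; the other reading, A ∩ (B ⊆ C), would ask ∩ to operate on a truth value, which is ill-typed.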
I'm in my second year of university and this is the second level of a class that's been mostly propositional calculus. It's a required computer science course, not a math one or anything. I have yet to take an algorithms course, but I have taken two semesters of a class focused on propositional calculus.
I can see how one class on it would be useful, critical thinking and all, but the second course goes so in depth that it feels unnecessary. Yeah, I know it's useful for circuit design, but do we really design circuits in computer science? I feel like we are learning this just to solve harder propositional calculus problems with few real-world applications.
Whenever I go online to learn more about programming or take an online computer science course, I never see propositional calculus, but at university they treat it like it's so important to learn, more important than algorithms apparently. Is this normal?
Shouldn't more focus be put on other things? And how useful is propositional calculus in computer science, other than for understanding higher-level propositional calculus?