A list of puns related to "Propositional Formula"
So, I wanted to create a program that, given a logic formula from the user, for example ((¬A ∧ B) ∨ C) ∧ A, calculates its truth table. In this case the formula would be true if A=1, B=0, C=1 or if A=1, B=1, C=1, and it would be false in any other case.
But, I don't know how to create a method that can read the expression given, and then calculate all the possible outcomes.
I know at least that you can use a Parser to read a string and then translate it into an expression, but I'm having trouble understanding how to actually create one.
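One way to sketch the whole pipeline in Python (the grammar, the operator symbols ~ & |, and all names here are my own illustrative choices, not a fixed standard): a recursive-descent parser turns the string into a small AST, and the truth table is produced by brute-forcing every assignment.

```python
import itertools
import re

def tokenize(s):
    # variables, the operators ~ & |, and parentheses
    return re.findall(r'[A-Za-z]\w*|[~&|()]', s)

def parse(tokens):
    # grammar: expr := term ('|' term)* ; term := factor ('&' factor)* ;
    #          factor := '~' factor | '(' expr ')' | variable
    pos = 0
    def peek():
        return tokens[pos] if pos < len(tokens) else None
    def eat():
        nonlocal pos
        t = tokens[pos]
        pos += 1
        return t
    def expr():
        node = term()
        while peek() == '|':
            eat()
            node = ('or', node, term())
        return node
    def term():
        node = factor()
        while peek() == '&':
            eat()
            node = ('and', node, factor())
        return node
    def factor():
        t = peek()
        if t == '~':
            eat()
            return ('not', factor())
        if t == '(':
            eat()
            node = expr()
            eat()  # the closing ')'
            return node
        return ('var', eat())
    return expr()

def evaluate(node, env):
    op = node[0]
    if op == 'var':
        return env[node[1]]
    if op == 'not':
        return not evaluate(node[1], env)
    if op == 'and':
        return evaluate(node[1], env) and evaluate(node[2], env)
    return evaluate(node[1], env) or evaluate(node[2], env)

def truth_table(s):
    ast = parse(tokenize(s))
    names = sorted(set(re.findall(r'[A-Za-z]\w*', s)))
    rows = []
    for values in itertools.product([False, True], repeat=len(names)):
        env = dict(zip(names, values))
        rows.append((env, evaluate(ast, env)))
    return rows

for env, result in truth_table('((~A & B) | C) & A'):
    print(env, result)
```

Running this on the example prints eight rows, true exactly when A and C are both true, matching the two satisfying assignments above.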
I heard that truth tables are used for simplifying the boolean expressions in an if-then command (https://old.reddit.com/r/compsci/comments/iqwxyh/six_years_as_a_professional_developer_and_yes_i/g4vckf4/), and the Karnaugh map was also brought up in the discussion (https://old.reddit.com/r/compsci/comments/iqwxyh/six_years_as_a_professional_developer_and_yes_i/g4urk87/).
Wikipedia says:
> The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions.
In propositional logic, what are the inputs and outputs of the K-map method? Specifically:
Can K-map simplify all formulas, or just some? What kinds of formulas can it simplify?
Does K-map guarantee that the resulting formula is logically equivalent to the input formula?
What (normal) form does K-map simplify a formula into?
Does K-map guarantee that the resulting formula is expressed entirely in terms of some complete set of connectives (e.g. any of $\{\neg, \lor\}$, $\{\neg, \land\}$, $\{\neg, \lor, \land\}, \dots$)?
K-map is said to convert a formula into some minimal or least complicated form. What is the criterion for measuring the complexity of a formula?
Is K-map covered in mathematical logic books? I have not been able to find K-map in logic books, only in some digital circuit design books, which don't present it in a mathematically clear way.
Thanks.
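As a concrete illustration of what K-map-style simplification promises (the example and the `equivalent` helper are my own, not from any particular textbook): the map for f(A,B) = (¬A∧B) ∨ (A∧B) groups the two adjacent 1-cells of the B column into the single term B, and for standard K-map usage the result is a logically equivalent sum-of-products. A brute-force truth-table check confirms the equivalence:

```python
from itertools import product

# Brute-force logical equivalence check: two boolean functions agree on
# every assignment of truth values to their nvars variables.
def equivalent(f, g, nvars):
    return all(f(*vals) == g(*vals) for vals in product([False, True], repeat=nvars))

original = lambda a, b: (not a and b) or (a and b)   # (~A & B) | (A & B)
simplified = lambda a, b: b                          # what the K-map grouping yields

print(equivalent(original, simplified, 2))  # prints True
```

This brute-force check is exponential in the number of variables, of course; the point of the K-map (and of tabular methods like Quine-McCluskey) is to organize that search visually for small variable counts.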
Let's consider the interpretation v where v(p) = F, v(q) = T, v(r) = T.
Does v satisfy the following propositional formulas?
(p→¬q)∨¬(r∧q)
(¬p∨¬q)→(p∨¬r)
¬(¬p ↔ ¬q)∧r
¬(¬p→q∧¬r)
Where F = false, T = true.
So (given that v itself doesn't show up in the formulas) the question comes down to:
v[(p→¬q)∨¬(r∧q)] = T or F?
v[(¬p∨¬q)→(p∨¬r)] = T or F?
v[¬(¬p ↔ ¬q)∧r] = T or F?
v[¬(¬p→q∧¬r)] = T or F?
Presumably using the truth values provided for the pieces out of which these larger statements are built? If so, I guess each case would be settled by working through the corresponding truth tables? And is v a function defined for (or applicable to) arbitrary statements, so that expressions like v(r∧q) make sense?
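Yes: a valuation extends from the variables to all formulas by the truth-functional definitions of the connectives, so v(r∧q) makes sense. One way to sanity-check answers like these is to encode v and the connectives directly (a small sketch; `implies` and `iff` are my own helper names):

```python
# Truth-functional definitions of the conditional and biconditional.
def implies(a, b):
    return (not a) or b

def iff(a, b):
    return a == b

p, q, r = False, True, True  # the interpretation v

f1 = implies(p, not q) or not (r and q)        # (p -> ~q) v ~(r & q)
f2 = implies(not p or not q, p or not r)       # (~p v ~q) -> (p v ~r)
f3 = (not iff(not p, not q)) and r             # ~(~p <-> ~q) & r
f4 = not implies(not p, q and not r)           # ~(~p -> q & ~r)

print(f1, f2, f3, f4)  # True False True True
```

So v satisfies the first, third, and fourth formulas, but not the second.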
I wrote my first Prolog program less than 24 hours ago so I am super confused with this problem. I have to write a unary predicate that takes a term and checks if that term is a propositional boolean formula. I understand this to mean that the term is either true or false or is a formula with a propositional variable. I have absolutely no clue where to start but I think I have to match with boolean values?
One example: and(propositional(X), true) is a term representing the propositional formula "X and true". So I am guessing my predicate would be called as checker(and(propositional(X), true)) and return yes?
If φ ∧ ψ is true then φ → ψ is also true. Is that all I have to say? I don't fully understand what '⊢' means.
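For what it's worth, '⊢' standardly denotes derivability in a proof system (φ∧ψ ⊢ φ→ψ asks for a derivation using the system's rules), whereas the truth-table claim is the semantic counterpart, written '⊨'. The semantic half can be checked mechanically: every valuation making φ∧ψ true also makes φ→ψ true. A brute-force sketch:

```python
from itertools import product

# Semantic entailment check: in every truth-table row where (phi & psi) is
# true, (phi -> psi) must also be true.
for phi, psi in product([False, True], repeat=2):
    if phi and psi:
        assert (not phi) or psi  # phi -> psi holds in this row
print("entailment holds")
```

Which rows you then cite in a written answer depends on whether the exercise wants the semantic argument or a formal derivation.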
I'm currently planning and making some of the components of my project, so I don't have any hard numbers on variables or clauses.
I'm planning to have large propositional formulas with maybe a billion variables and quite a lot of clauses. These will be read, and manipulated (have all instances of some variables replaced with other multi GB sized formulas) before being written to another file on the disk. This will result in very large files.
I know that algorithms like ESPRESSO can remove redundancy in CNF, but I don't want to convert the formulas to CNF until I'm done with this manipulation. I would still like to remove as much redundancy as possible and reduce the file size. Is there any algorithm I can use to achieve this?
I'll be using a buffered reader to read and manipulate parts of the formula before writing them to disk and reading further into the file, so this algorithm needs to work while looking only at a moving window of the formula, not the global picture. This window can be a few hundred megabytes, but I would rather not keep much more than that amount of the formula in RAM at any one time.
This is my solution to exercise 6.7 from the book "Functional Programming using F#". If you don't know this book, you're missing out!
Since this is the first non-trivial exercise I've done, please have a look at my solution. I didn't test the code, but it looks correct :)
type Formula =
    | Atom of string
    | Neg of Formula
    | Conj of Formula * Formula
    | Disj of Formula * Formula

(* Transforms a formula into negation normal form (i.e. Neg is only applied to atoms).
   This function uses the De Morgan laws
       Neg(p ^ q) <=> Neg(p) V Neg(q)
       Neg(p V q) <=> Neg(p) ^ Neg(q)
   and the double negation law
       Neg(Neg(p)) <=> p
*)
let rec negNormForm = function
    | Atom _ as x -> x
    | Neg(Conj(x,y)) -> Disj(negNormForm (Neg x), negNormForm (Neg y))
    | Neg(Disj(x,y)) -> Conj(negNormForm (Neg x), negNormForm (Neg y))
    | Neg(Neg(x)) -> negNormForm x
    | Neg x -> Neg (negNormForm x)
    | Conj(x,y) -> Conj(negNormForm x, negNormForm y)
    | Disj(x,y) -> Disj(negNormForm x, negNormForm y)
(* Transforms a formula into conjunctive normal form (i.e. a conjunction of disjunctions).
   This function uses the distributive laws
       p V (q ^ r) <=> (p V q) ^ (p V r)
       (p ^ q) V r <=> (p V r) ^ (q V r)
*)
let conjNormForm =
    let rec conjNormForm0 = function
        | Disj(p,Conj(q,r)) -> Conj(conjNormForm0 (Disj(p,q)), conjNormForm0 (Disj(p,r)))
        | Disj(Conj(p,q),r) -> Conj(conjNormForm0 (Disj(p,r)), conjNormForm0 (Disj(q,r)))
        | Disj(p,q) as f ->
            // The inner match must be parenthesized, otherwise the
            // following | Conj ... case is parsed as one of its branches.
            (match (conjNormForm0 p, conjNormForm0 q) with
             | ((_,Conj(_,_)) | (Conj(_,_),_)) as g -> conjNormForm0 (Disj g)
             | _ -> f)
        | Conj(p,q) -> Conj(conjNormForm0 p, conjNormForm0 q)
        | f -> f
    negNormForm >> conjNormForm0
(* Tells if a formula is a tautology (i.e. true for any assignment of truth values to the
   atoms).
   Note: This function uses conjNormForm.
*)
let isTautology =
    // isTautologyDisj returns (isTautology?, Map<atomName, isAtomNegated>).
    // The arg `atoms` is of type Map<string, bool> meaning Map<atom
So I am working on a project for my class where we had to code a sudoku solver (finished, works fine) and separately describe how propositional logic can be used to express three of the main rules of sudoku:
So lets take 'Every number appears in each row exactly once.'
Each Boolean variable X_{r,c,d} (where 1 ≤ r, c, d ≤ 9) represents the truth value of the statement "the cell in row r, column c contains digit d".
Digit d appears at least once in row r: X_{r,1,d} ∨ X_{r,2,d} ∨ … ∨ X_{r,9,d}. (Note this must be a disjunction; a conjunction would say d appears in every cell of the row.)
Digit d appears at most once in row r: for every pair of distinct columns c < c′, the clause ¬X_{r,c,d} ∨ ¬X_{r,c′,d}.
Now to prevent a wall of text, obviously the other 2 conditions are structurally very similar to 'Every number appears in each row exactly once.' But as someone who is fairly new to propositional logic, what is the best way to go about combining all of these statements/clauses into one single formula? Do I just AND it all together, or what?
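Combining them is indeed just one big conjunction: in the standard CNF encoding, the formula is the AND of every clause from every rule. A sketch of generating the row clauses (the DIMACS-style variable numbering here is my own choice):

```python
from itertools import combinations

# var(r, c, d) numbers the Boolean variable X_{r,c,d}, "cell (r,c) holds
# digit d", as an integer 1..729 so clauses can be lists of signed ints.
def var(r, c, d):
    return (r - 1) * 81 + (c - 1) * 9 + d

def row_clauses(r, d):
    cells = [var(r, c, d) for c in range(1, 10)]
    clauses = [cells]                        # at least once: X_{r,1,d} v ... v X_{r,9,d}
    for a, b in combinations(cells, 2):      # at most once: pairwise ~X v ~X
        clauses.append([-a, -b])
    return clauses

# The full row rule is simply the conjunction (AND) of all these clauses.
clauses = [cl for r in range(1, 10) for d in range(1, 10) for cl in row_clauses(r, d)]
print(len(clauses))  # 9 rows * 9 digits * (1 + 36) clauses = 2997
```

The column and box rules produce clause sets of the same shape, and the final formula is the AND of all three collections (plus the "each cell holds some digit" clauses, if your formulation needs them).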
My friend and I are trying to work out (or remember) whether, when testing formulas to see if they're tautologies, you negate the formula before you build your truth tree. We have looked at multiple sources which say different things, so we're not quite sure.
Here are the questions at hand.
Here is my working so far.
Also, when deciding if it is a tautology or not, my understanding is that if you start from the negated formula and all the branches close, then it is a tautology. So the question on the left is a tautology and the one on the right is not (if my working is correct).
Could someone please give me an easy-to-understand definition for a propositional formula? In my Logic class today, I think I heard my prof say that if we have a variable such as A, we can have three formulas using only A, which are: 0, 1, and A. This doesn't really make sense to me or perhaps I misunderstood what he was trying to say. Thx.
Philosophy concerns evaluating arguments and comparing the plausibilities of theories, but without a way to determine the weight of a reason or the plausibility of a proposition, I struggle to see how that's even possible.
I encountered this term in Many-Valued Logics, by Grzegorz Malinowski, in the section The classical logic. It wasn't explained.
I know that every certificate for the Boolean satisfiability problem, or SAT (I will refer to it as this from now on), can be verified in polynomial time. My question is whether this polynomial-time verification can be done for all certificates by converting the instance of the SAT problem to CNF and then just mapping the corresponding T/F values in the certificate to the literals. I need a proof, or a reference to a proof, of this.
Any proof to this would be sufficient, however to help you out you can just simply help me prove this lemma, which would complete my own proof attempt:
Lemma: EVERY propositional formula can be converted into CNF with only a linear increase in the size of the formula, where only satisfiability has to be preserved (equivalence need not be preserved). I.e., can we convert EVERY propositional formula into an equisatisfiable CNF without exponentially exploding the size of the formula?
For your reference here is a useful Wikipedia article with some information about this topic: https://en.wikipedia.org/wiki/Conjunctive_normal_form
Game Title: Halo Infinite
Platforms:
Trailers:
Developer: 343 Industries
Publisher: Xbox Game Studios
Review Aggregator:
OpenCritic - 86 average - 94% recommended - 93 reviews
ACG - Jeremy Penter - Buy
>Video Review - Quote not available
AusGamers - KostaAndreadis - 8 / 10
>In the end though it's hard to fault what 343 Industries has accomplished with Halo Infinite. It's very much the spiritual successor it purports to be -- with forward thinking design and elements that flow in a way that reminds you of the timeless nature of the fluid, stylish combat of old. The lack of co-op is something you feel, but in terms of cinematic spectacle this is the Master Chief carrying the flag once more for Xbox. Albeit in that new-school form of being able to jump in and, well, play anywhere.
CGMagazine - Khari Taylor - 9.5 / 10
>Unencumbered by the baggage of the upcoming story campaign, Halo Infinite Multiplayer is arguably the definitive incarnation of the franchise's online competitive component and is strong enough to stand on its own despite its F2P leanings.
COGconnected - Garrett Drake - 76 / 100
>I've shared many gripes I have with Halo Infinite. I've shared them meticulously because I love this franchi
I know this question looks trivial, but I'm considering how to prove ((x=y)∧(y=z))→(x=z) from axioms such as equality's reflexive property, substitution for functions and formulas, and the axioms of propositional and predicate logic. Or is it a fact that it's an axiom in itself and can't be proved? I have already proved the symmetric property of equality.
btw Merry Christmas everyone! Or Merry early Christmas or late Christmas for different time zones.
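For what it's worth, transitivity is derivable rather than axiomatic in the usual systems. One possible sketch, using the substitution scheme for formulas together with the already-proved symmetry of equality (step numbering and the exact axiom names will vary by system):

```latex
% Substitution scheme assumed: s=t \to (\varphi(s) \to \varphi(t))
\begin{align*}
1.\;& x=y \land y=z            && \text{assumption}\\
2.\;& x=y                      && \text{from 1}\\
3.\;& y=x                      && \text{symmetry, from 2}\\
4.\;& y=x \to (y=z \to x=z)    && \text{substitution with } \varphi(w) :\equiv (w=z)\\
5.\;& y=z \to x=z              && \text{modus ponens, 3, 4}\\
6.\;& y=z                      && \text{from 1}\\
7.\;& x=z                      && \text{modus ponens, 5, 6}
\end{align*}
```

Discharging the assumption (deduction theorem or conditional proof) then yields ((x=y)∧(y=z))→(x=z).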
Disclaimer: I'm not a mathematician, I haven't looked at the source of how voting power is calculated, and I'm not from an English-speaking country.
The biggest FLAW in EOS is VOTING. We saw that yesterday (depending on your timezone) when EosStoreBest screwed up and MANY people removed their vote for this clown BP, and then 1 whale voted for them and that 1 whale TOTALLY wrecked all the people's votes against that BP.
So it looks like it's STUPID (and was done intentionally) to calculate vote power based on the EOS you own. EOS should change the formula and make sure that 1000 individual voters who have 1 EOS each are MORE powerful than 1 account holding 1000 EOS.
Voting power should be calculated based on the number of voters, not on EOS balance. At least not as much as it is now. Bitfinex and a few others rule all of EOS now just because they have a lot of EOS. A lot of EOS also allows you to get even more EOS, so this EOS platform looks like a pyramid scheme where the higher in the hierarchy you are, the more you get.
As title says, what are your favourite maths-related quotes? Here are six of my favourites:
"I have discovered a truly remarkable proof of this theorem which this margin is too small to contain." - Pierre de Fermat, c. 1640, on what became Fermat's Last Theorem.
[After proving 1+1=2 in the language of formal mathematical logic] "The above proposition is occasionally useful." - Principia Mathematica, 1910.
"In the fall of 1972, President Nixon announced that the rate of increase of inflation was decreasing. This was the first time a sitting president used the third derivative to advance his case for re-election." - Hugo Rossi, 1996.
"Do not disturb my circles." - Archimedes (possibly apocryphal), c. 212 BCE.
"Oh, that? They grow in my garden." - Roger Apéry, 1979, when asked how he had developed a formula which helped show that the Riemann zeta function evaluated at 3 is irrational.
"You know, for a mathematician, he did not have enough imagination. But he has become a poet and now he is fine." - David Hilbert, on a student of his who dropped mathematics to study poetry.
EDIT: Evidently people liked this. I hope, like I have, people have discovered new maths quotes that resonate with them! Here's one more from me, which one of my teachers said in response to a question I had one day.
Me: "This is fine and all, but why are we doing this? What is the use?"
Teacher: "Why are we doing this? Because it's there and because it's nice."
If anyone is able to help out, I have a question related to interpreting the following passage:
"In some formulations of propositional logic, one uses 't', 'f' as symbols of the object language itself; these symbols are then called propositional constants. And a Boolean valuation is redefined by adding the condition that t must be given the value truth and f falsehood. [Thus, e.g. t by itself is a tautology; f is unsatisfiable; X → t is a tautology (where X is a formula); f → X is a tautology. Also, under any Boolean valuation t → Y has the same truth value as Y; X → f has the opposite value to X. Thus t → Y is a tautology iff Y is a tautology; X → f is a tautology iff X is unsatisfiable]."
If I'm understanding this correctly, it seems to make sense to me that t by itself would be a tautology, as t would presumably be given the value truth in all valuations, while likewise f is unsatisfiable because it is never given the value truth, but I'm not understanding why it would be a tautology that X implies t or that f implies X. If X, as a formula, is either a proposition or a combination of proposition(s) and logical function(s), is this to say that X itself is t, meaning that if Boolean values were assigned, X would have the value of truth? Then I don't see why this would be the case, as there can obviously be formulas that wouldn't have the value of truth given to them. For example, if we were to say X is "P ∧ ~P", presumably the fact that we're talking about X doesn't imply that X is true (or would be true if we were to assign Boolean values). This doesn't seem to be what's meant, but I don't see what else would be meant by the statement that X implies t. If anyone can point me in the right direction to understanding this, that would be highly appreciated. Thanks.
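The key point is that "X → t is a tautology" is a claim about the conditional, not about X: whatever value X takes, the conditional comes out true because its consequent t is always true (a material conditional with a true consequent, or a false antecedent, is true). A quick mechanical check of the bracketed claims (a sketch; `implies` is my own helper name):

```python
# Fix the propositional constants, then check each claim for both values of X.
def implies(a, b):
    return (not a) or b

t, f = True, False
for X in (False, True):
    assert implies(X, t) is True      # X -> t : true in every row, so a tautology
    assert implies(f, X) is True      # f -> X : true in every row, so a tautology
    assert implies(X, f) == (not X)   # X -> f : has the opposite value to X
print("all claims check out")
```

So with X = "P ∧ ~P", the formula X → t is still a tautology even though X itself is unsatisfiable; nothing about the conditional forces X to be true.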
Hi everyone! I know there are a lot of year-end retrospectives flying around the sub and in that sense this is just another drop in the bucket, but I wanted for my own sake to create a summary of my gaming year so I could keep my thoughts organized and have a more permanent reference point back in the future. With that in mind feel free to disregard this post if it adds nothing new, but of course I welcome any and all discussion as well!
First, some background: I keep pretty meticulous track of what games I play and especially games I've finished. I started a list perhaps 15 years ago that has only continued to grow in both size and detail. While that means that for quite some time my completed gaming efforts have been well-organized, it was only around 2019 that I made a point of organizing my upcoming gaming as well, confronted as I was by a frankly intimidatingly-sized backlog. I've found that, more than any other factor, this simple act of organizing/planning my backlog has enabled me to really start taking chunks of it away. How effective has it been?
In 2019, I set a new personal record, completing 69 total games.
In 2020, I completed a respectable 45 total games, despite welcoming a newborn and spending hundreds of hours on a massive creative writing project.
In 2021, with fewer restraints on my time and even better organizational methods, I shattered my previous record and completed 94 games.
This brings my lifelong total up to 606 completed games and counting. While my backlog remains large, the number of games I'm actually really excited about playing has shrunk significantly, so I'd expect the pace to slow a bit for this year and beyond - but we'll see!
Without further ado, here's the list, presented in chronological completion order, along with my personal ratings for each game. Unfortunately two of these games are too recent to be included by name here, so please pardon the redactions where relevant.
Number | Game | Platform | Completion Date | Score (Out of 10) |
---|---|---|---|---|
1 | Exit the Gungeon | Switch | January 5 | 8.5 |
2 | Picross e3 | 3DS | January 14 | 7 |
3 | The Witcher 2: Assassins of Kings | PC | January 15 | 8 |
4 | Unreal II: The Awakening | PC | January 21 | 4.5 |
5 | Fez | PC | January 25 | 6 |
6 | Shovel Knight: King of Cards | Wii U | January 28 | 7.5 |
7 | Middle-Earth: Shadow of War | PS4 | January 29 | 5.5 |
8 | Tomb Raider: The Last Revelation | PC | February 1 | 6 |
9 | Picross e4 | 3DS | February 7 | 7.5 |
10 | Dandara | PC | February 8 | 7 |
11 | McPixel | PC | February 9 | 2 |
12 | Offspring Fling |
How is this not saying, 'S* is satisfiable if you can make any propositional variable anywhere true.'?
http://imgur.com/c2fIny7
I imagine it has something to do with the fact that the big unions are not disjunctions, like I had been imagining they were, but I can't work out what they are instead.
Thanks for the help.
I'll start off addressing the elephant in the room. I don't have insider information; most of what I have pieced together was accomplished by classic DD. u/Worth_Notice3538 got a copy of the Informed Consent. There's a bit to unpack, so I'll do the most important stuff first.
Chances of receiving bucillamine
The enrollment ratio has changed from 2:1 to 1:1. This was a good move to preserve statistical power after our unexpectedly good results in the first interim analysis forced us to pick a single dosage at the 400 interim analysis. This change should have been communicated.
The revision date coincides with the addition of viral load testing
Advarra is one of the two most popular commercial IRB's to use, so no surprise there. The latest revision to the informed consent was August 10th, 2021. That helps us establish a timeline for exactly when the viral load testing was implemented. Based on the dates, a few of the 600 interim update patients likely had viral load testing. So we will be around 200 viral load results at the 800 mark. If we get unblinded, that should be enough to tell where we are on the antiviral effect.
Thanks to the efforts of u/EggPotential109, we know that only unvaccinated patients are being enrolled. It also sounds like sites are taking that "at least 2 symptoms" criteria seriously and are aiming for a patient profile more likely to progress to the hospital, based on clinical presentation.
The US CDC estimates that, since the start of the pandemic, there have been 124 million symptomatic COVID cases and 7.5 million hospitalizations. Since the CDC also tracks vaccination status, we can be reasonably sure that this overall 6% hospitalization rate is a good estimate for the unv
Can propositional logic be formally defined as a pair (L, ⊢) where L is a propositional language (alphabet plus wff rules) and ⊢ is a consequence relation (reflexivity, monotonicity and transitivity)?
The consequence relation is often said to be part of metalogic, which generates confusion as to whether to count it as part of propositional logic or not.
(TLDR at the bottom)
I posted my previous thesis about half a year ago, and it was very well received by many people and is still being used as a reference today. But there is a problem: it doesn't reflect my current views on where I think Rocket Pool will be in the future. I originally thought that it would be more appropriate if I gave my extremely conservative views on it (I lowered expectations and numbers) for a few reasons. It was a project that hadn't launched yet when other staking services were just about going live back in December of 2020, and it was still being tested on the testnets. When I first posted my original thesis, I definitely put in a few conservative numbers so that it would be taken somewhat seriously.
In this updated thesis, I'll be giving everyone my full expectation of Rocket Pool and where I think it'll go in the future (no holding back on this one). I'll be going over a ton of topics, so feel free to skip to what interests you. :) Some of the information will be a repeat of the first post (things that I think should be common knowledge when interacting with the Rocket Pool protocol), and some will be new, like the smoothing pool or staking yield arbitrage. There's a lot to cover, so let's get started.
rETH is a derivative of ETH that really is an extremely pristine asset. It combines the deposit of ETH as well as the staking yield in its valuation. The average hodler of ETH will have to decide if holding ETH is a better option than buying rETH, which will likely incur a taxable gain/loss depending on your tax jurisdiction. rETH is also a better form of collateral to be used in DeFi. When taking loans against ETH to borrow USDC or other assets, it can be a little nerve-wracking since there's the possibility of getting liquidated. With rETH, DeFi users will be a little bit safer, since the value of the collateral will continue to rise compared to using plain old ETH.
Something I hear from time to time is people saying, "once you stake with a CEX or a SaaS provider, you won't be able to withdraw those funds," and it's true, but Rocket Pool is completely different because rETH exists. Some staking protocols have used a similar concept where they'll issue a staking derivative and do some DeFi magic. As a result, there will be a soft peg for those tokens. Which brings up the question of what happens when that peg is broken? The result is a discount on the derivative, but currently, I don't t
PlayStation's appeal, especially during the PS4 generation, came from its exclusive games. Even though some second-party titles found a new home on other platforms, games from Sony's in-house PlayStation Studios family remained tied down to the console. But towards the end of the PS4's life-cycle, Sony moved ground when it started porting some of the console's first-party exclusives over to PC.
The Beginning
Guerrilla Games' 2017 open-world adventure Horizon Zero Dawn, ported over to PC in 2020, was the first first-party game to make the big switch. This was a watershed moment for a publisher that historically kept its games (its best assets) close to its chest and exclusively available on PlayStation machinery.
In early 2021, it was announced that Bend Studio's 2019 open-world biker Days Gone would be making its way to PC. On the 17th of April 2021, the game launched on Steam and the Epic Games Store. With almost 30,000 concurrent players over its launch weekend and the top spot in that week's Steam sales charts, Sony had a successful formula on its hands: release a game on console, squeeze as much out of it as possible in terms of sales, and then drop it on PC a few years later.
If Sony's intent of bringing its in-house developed games to PC going forward wasn't clear enough, then one shrewd move in the summer of 2021 made it crystal clear. Just a few months ago (July 2021), Sony acquired PC port and tech specialist Nixxes Software. The Dutch studio has previously worked on several PC ports for publisher Square Enix, including the recent Tomb Raider reboots, 2017's Deus Ex: Mankind Divided, and 2020's Marvel's Avengers. This couldn't be a clearer signal of intent from the mega-corporation, which has now outright acquired a studio known for its work on PC titles, one that is expected to continue with this field of work moving forward.
This push forward with PC releases looks to continue into 2022. Ever since PlayStation announced its intentions to bring Horizon Zero Dawn to PC, fans have been clamouring for 2018's God of War to receive the same treatment. It makes sense considering that this was the next first-party game to release after Horizon. And just like Aloy, the next chapter in Kratos and Atreus' adventure is scheduled to come out in 2022, allowing Sony to use this game to draw a PC audience into purchasing a console in preparation for Ragnarok's release. God of War (2018) was announced for PC back in October and is
While there has been an increase in skeptical pop-psychologists debunking pseudoscience online, this has unfortunately highlighted an issue with a lack of self-awareness of issues within their own field(s).
While the rift between the soft-sciences and hard-sciences appears to be closing, there still remains a gaping hole in soft-science as a whole: the complete failure of epistemic soundness.
This has led to a number of semi-professional skeptics elevating their own pseudo-science (through the use of formal fallacies) by knocking down obvious examples of pop-pseudoscience, while continuing to forward fallacious reasoning, often through unproven concept-linkage and falsely equating unreplicated preliminary studies with double-blind, independently replicated, meta-analyzed scientific studies. There is also the issue of academic-oriented (woozle effect) examples that also permeate through social media by these skeptics (linguistic programming, narcissism fearmongering, exaggerated serotonin claims, etc.).
In the past I've pointed out common examples of this and have even gone so far as to use cross-comparative analysis to break down arguments and show exactly where the common unsound jumps in logic are in many (psychological) assertions/arguments. I've noticed that instead of skeptical practitioners taking this seriously, they often resort to childish tactics such as name-calling, ad hominem attacks, sealioning, calls for mobbing, calling for credential measurement tests, and so on.
If anything I would like people to understand several key aspects that separate hard-science from soft-science in this regard, simply because it's very tiresome to spend time and energy trying to correct people unwilling to even consider the possibility that their intuition, assumptions, experience and even education can be missing critical components of soundness:
Do your worst!
I'm interested in a question about the sociology of the pedagogy of logic.
I take it that, almost always, when one first learns proofs in propositional logic, one learns a natural deduction system (as opposed to, say, a sequent calculus or axiomatic system). And usually, I think, the proof notation that one learns is Fitch-style rather than Gentzen-style. But it seems to me that there is some variance in specific introduction and elimination rules that one learns. I imagine the most variance is with regard to rules for disjunction.
There seems to be two widely used candidates for disjunction rules:
Argument from Cases: If one has (X V Y) and one can prove some formula Z from a hypothetical assumption of X, and one can also prove Z from a hypothetical assumption of Y, then one can conclude Z.
Disjunctive Syllogism: If one has (X V Z) and ~X, one can conclude Z. Alternately, if one has (X V Z) and one has ~Z, one can conclude X.
Which of these were included in the first systems of rules you learned? Or, if you've taught intro logic, which of these rules did you teach in the first system you taught? Or if there are some other disjunction rules you learned/taught, what are they?
One of the questions/concerns I see consistently with Amp is the question of "Why does Flexa not market Amp token?" We see big announcements from Flexa regarding their payments network and they often leave Amp out of the public-facing equation. Here's my perspective on why Flexa does not market Amp, and does not need to market Amp for the token to succeed in fulfilling its purpose.
What is Amp?
First you have to understand what Amp is. Amp is basically an open-source collateral token - software that runs on the Ethereum network that can also maintain asset value and act as collateral to secure digital transactions. The founders of Flexa designed Amp as a piece of Flexa's business model. Flexa uses Amp on the back-end of their payments network to offer merchants (their customers, the ones they market to) extremely good transaction rates (several times better than traditional payment processors), instant transaction finality, and fraud prevention. Flexa also uses Amp to offer spenders (also Flexa's customers, the ones they market to) a novel platform to spend (currently almost 50) digital assets at merchants around the world, instantly and privately. The goal is to enable spending at big merchants, small merchants, medium merchants, online merchants, and in as many countries as possible. When Flexa's customers spend digital assets through Flexa's network, Flexa purchases Amp token from the open market and rewards stakers a fee for collateralizing the transaction, cutting out the fatty middle-men, and guaranteeing against fraud for the merchant.
In order to offer these things, Flexa need not waste time marketing its back-end software. Flexa is much better off marketing its spending options, features, and benefits so that merchants will want to adopt Flexa as a payment rail, and so that spenders will want to spend with Flexa.
Put simply, Amp is a piece of software that Flexa uses behind the scenes. Now stop and ask yourself, how often do you see other payments networks marketing the software they use? How often do you see payments networks market their software when they make major announcements? Basically never.
But marketing Amp will help increase its value and benefit Flexa, right?
Flexa only really needs minimum collateral value to ensure instant throughput on its network. There is no reason for Flexa to drive the value of Amp token sky-high in order to extremely over-collateralize its potential near-term throughput. Flexa can (and does cur
In math there's a proof technique called mathematical induction that proceeds as follows.
Suppose you have propositions P1, P2, ..., Pn.
Suppose one proposition is known to be true, and all of the propositions can be ordered so that the first one is the known true proposition, the first implies the second, the second implies the third, and so on.
Then all of the propositions are true.
Is there something similar to this taught in formal logic in philosophy? How is this concept used in philosophy if so?
Edit: I said "possibly infinitely many" previously, but that was not entirely true. It is true for any number of propositions.
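Stated as a single schema over propositions indexed by the natural numbers, the chain of implications described above reads:

```latex
\[
\bigl[\, P(1) \;\land\; \forall n\,\bigl(P(n) \to P(n+1)\bigr) \,\bigr]
\;\to\; \forall n\, P(n)
\]
```

Note that each instance P(n) → P(n+1) is itself just a propositional conditional; what makes induction a distinct principle is the quantification over all n.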