A list of puns related to "Method of moments (probability theory)"
Could John B have stored the gold in the form of chloroauric acid (gold chloride)? Short version: this is basically gold in a liquid solution. The gold can be extracted from the solution as the solid element gold (Au).
This would have been a ridiculously clever way for John to store some or all of the gold, would it not? If there were buckets of unlabeled liquids in John's clock shop, perhaps they were gold storage, until John needed to recover gold when he needed money.
I am just throwing this out there as an idea that popped into my head. If this idea isn't scientifically possible, please feel free to explain why.
Gold extraction video: https://www.youtube.com/watch?v=onT6u0xjIe4
Hello everyone!
I'm a statistician and I really love card games in general. I would like to approach the beautiful game of poker with a statistical/mathematical approach; is there any manual about it?
In general, what are the best manuals for poker? Is there a RELIABLE list of the top/best manuals?
Thanks everyone!
So I'm currently playing Tenyi Giant Ballpark. That requires me to run 6 normal insect monsters in my deck that I don't necessarily want to draw into. Link Spider helps with that a bit, but it takes away my normal summon. Occasionally I will open with 3 normal insect monsters, and if that happens and I don't draw Giant Ballpark, I always lose. This situation where I draw multiple normal insect monsters comes up in about 20% of my matches. I was wondering if running a 45-card deck instead of 40 would help with that. Are any of y'all in this situation as well?
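For what it's worth, the odds can be sketched with a hypergeometric calculation. The snippet below is a rough sketch of my own, assuming a 5-card opening hand and exactly 6 normal monsters in both the 40- and 45-card builds (it ignores Giant Ballpark itself and any other search effects):

```python
# Rough sketch: probability of opening 2+ of the 6 normal insects in a 5-card
# hand, for a 40-card vs a 45-card deck (assumptions: 5-card hand, exactly 6
# normals in both builds, no other search effects considered).
from math import comb

def p_at_least(k, normals=6, deck=40, hand=5):
    """P(opening hand contains at least k of the `normals` copies)."""
    total = comb(deck, hand)
    return sum(comb(normals, i) * comb(deck - normals, hand - i)
               for i in range(k, min(normals, hand) + 1)) / total

for deck in (40, 45):
    print(deck, "cards:", round(p_at_least(2, deck=deck), 3))
```

Under those assumptions, moving from 40 to 45 cards does lower the chance of opening with two or more normals, but it dilutes your odds of opening Giant Ballpark by the same mechanism, so the trade-off cuts both ways.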
The simulation theory in a nutshell:
Over the past few decades, video games, CGI, graphics, etc. (simulations) have gotten exponentially better. Think about the first video game you played when you were a kid, and what is available now.
Assuming humanity doesn't destroy itself, technology will keep on advancing, and these simulations will continue to get better.
Eventually, these simulations will get so good they will become indistinguishable from reality.
Countless simulations will be running, leaving it statistically more likely that we're in one of these simulations rather than in "true" reality.
I am planning on taking an honours-level measure theory course next semester as I have been really interested in fields of mathematics that involve geometry and calculus. However, when I tried to search for more information a lot of people seem to discuss measure theory in relation to probability theory, which is a field that I am not particularly interested in.
Fields that I would be interested in would be something like differential geometry, partial differential equations, functional analysis, and complex analysis. So far, I have taken a course in real analysis, elementary number theory, abstract algebra and linear algebra.
I have recently begun reading an introductory book on Quantum Physics that explains the major concepts without diving deep into calculations and problems.
After reading about the Quantum Zeno Effect, particularly its application in interaction-free measurements, I found myself struggling to grasp how the Zeno Effect can coexist with basic probability theory. Maybe the book provides a less-than-ideal explanation of the effect, but I am not certain, so I came here for help.
The book describes this situation: two perfectly reflective mirrors face each other; a third, double-sided, imperfect mirror sits between them (an imperfect mirror is one that has a small chance of letting a photon through its surface instead of reflecting it). A photon is shot into the left side of this setup, where it bounces back and forth between the leftmost mirror and the central mirror until, at some point, it passes through the central mirror and begins rebounding in the right half of the setup.
Then, the author describes a situation where an object exists in the right half of the setup that will absorb the photon if it ever crosses the central mirror. Thus, because the photon's state (existing in the left half or right half of the setup) is known after each of the particle's reflections off the central mirror, it will never pass over to the right half. The author describes this situation to introduce a method of interaction-free measurement.
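(For what it's worth, here is the quantitative version of the claim as it is usually presented for this kind of setup, under the assumption, which may differ from the book's numbers, that the central mirror's per-bounce transmission amplitude is sin(π/2N) over N bounces. Without the absorber the amplitudes add coherently and the photon ends up on the right; with the absorber checking after every bounce, only the tiny per-bounce probabilities add, and the total chance of ever crossing shrinks like 1/N.)

```python
# Sketch (my own parameters, not necessarily the book's): per-bounce coupling
# angle theta = pi / (2N), i.e. transmission amplitude sin(theta) per bounce.
import numpy as np

for N in (10, 100, 1000):
    theta = np.pi / (2 * N)
    p_coherent = np.sin(N * theta) ** 2        # no absorber: amplitudes add -> 1
    p_measured = 1 - np.cos(theta) ** (2 * N)  # absorber checks after every bounce
    print(N, round(float(p_coherent), 3), round(float(p_measured), 3))
```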
However, since the probability of the photon passing through the central mirror is independent of previous events (just as a coin flip is independent of previous coin flips), why would measuring its position force it to remain in the left half of the setup? It doesn't need to reflect off the mirror, say, ninety-nine times before it passes through on the one-hundredth, so I find it impossible for measurement to affect the photon's state.
Could somebody please explain how the Quantum Zeno Effect reconciles itself with the laws of probability? Like I said earlier, the book I am reading may simply fail to properly explain the Effect, but I thought this subreddit might be able to assist me either way. Thank you!
TL;DR at the bottom
Scientific proof in the comments (as automod doesn't allow me to use links here)
Let me show you a problem in probability theory that most people find very counterintuitive: Bayesian inference.
Imagine you're in a town that has two cab companies: the green cabs and the blue cabs. 75% of the cabs are blue, 25% are green. One night, a cab is involved in a hit-and-run accident, but there is a witness who states it was a green cab. Later, the police run tests on the witness and find that he gets 80% of the colours right in those conditions. What are the chances that it actually was a green cab? Most people say it's between 70 and 80 percent. This isn't right, however, and the mistake is commonly referred to as the base-rate fallacy.
Now take it this way: Out of 1000 cabs, 750 are blue and 250 are green. Out of 750 blue cabs, the witness identifies 600 as blue and 150 as green. Out of 250 green cabs, the witness identifies 200 as green and 50 as blue.
Now it is not difficult to tell the chances: 200 out of the 350 cabs identified as green are actually green, which is about 57%.
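The same 57% drops out of Bayes' theorem directly; a tiny sketch with the numbers above:

```python
# Bayes' theorem with the cab numbers from the example above.
p_green = 0.25                 # base rate of green cabs
p_blue = 0.75
p_say_green_if_green = 0.80    # witness accuracy
p_say_green_if_blue = 0.20

p_say_green = p_green * p_say_green_if_green + p_blue * p_say_green_if_blue
p_green_given_say_green = p_green * p_say_green_if_green / p_say_green
print(round(p_green_given_say_green, 3))  # ~0.571, i.e. about 57%
```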
This format is simply easier for our brains to use because it is more intuitive and doesn't require a lot of calculation.
As you can see, this "trick" is not only useful for probability calculations; not knowing it can also lead to false conclusions in legal cases (like this example) or medical ones (for example, tests for rare illnesses, which have to be repeated to be reasonably sure someone is infected).
TL;DR: don't use percentages when calculating probabilities; use natural frequencies instead. Instead of 10%, say 10 out of 100. Your brain can handle this information much more easily and smoothly.
Almost all "games of chance" that are frequently played at casinos are known to have a "house edge", i.e., even with completely fair play, the casino is known to come out ahead over time thanks to the rules of the game, the odds of various outcomes occurring, etc.
These days, we can work all of this out with certainty: we know the probability of cards being drawn both within a single hand and over the course of many sessions, and tools like game theory can be used to analyse the outcome of every potential situation, even when everyone involved knows the optimal strategy.
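(As a concrete illustration of the modern calculation, here is an example of my own, European roulette rather than cards: a single-number bet pays 35 to 1 but wins with probability 1/37, so the player's expected value per unit staked is negative, and that shortfall is the house edge.)

```python
# Example of my own (European roulette, single-number bet): the house edge is
# just the negative of the bettor's expected value per unit staked.
p_win = 1 / 37          # one winning pocket out of 37
payout = 35             # net profit on a win, per unit staked
ev = p_win * payout - (1 - p_win) * 1
print(f"player EV per unit: {ev:.4f}")   # about -0.027, i.e. ~2.7% house edge
```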
But organised gambling has existed for hundreds if not thousands of years. How did they maintain a "house edge" before the development of modern understandings of probability?
I've been visiting my parents for the last two weeks and started biking around the lake near their house to hatch eggs. There are very few pokestops around this path which is roughly 8 miles around, but lots of magikarp and the occasional dratini.
The first time I did it, I hatched 6 eggs (4x 5k, 2x 2k). Afterwards, I got 2x 10k eggs.
Over the 4 days, I didn't hit a single pokestop, just walked or biked and caught pokemon until I hatched every single one of my eggs.
Then, again, I received 2 10k eggs.
Anyone else experienced anything similar?
Edit: If you'd like to help get to the bottom of this, your data would be appreciated! Use this google form whenever you receive an egg
I was wondering if anyone here has followed the course Probability theory (401-3601-00L) in the past few years?
If so, I would like to know how much it built on what was covered during the rest of the bachelor's in mathematics.
I am assuming that Probability and Statistics (401-2604-00L) is an assumed background. But is there any other course (e.g. analysis I, II, III or other) that is required to follow that course?
I am currently doing an interdisciplinary master's degree, and I was considering taking first "Probability and Statistics" and then "Probability theory", but my mathematical background is kind of bad.
Example taken from Yates & Goodman Markov Chain Supplement
I'm having trouble understanding why the communicating class {4,5,6} is aperiodic. Any help on understanding periodicity would be amazing!
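I don't have that supplement in front of me, so the sketch below uses a made-up transition matrix rather than the Yates & Goodman one, but the mechanics are the same: the period of a state i is gcd{ n >= 1 : P^n(i, i) > 0 }, and a communicating class is aperiodic exactly when that gcd is 1, for instance because some state has a self-loop, or because return paths of coprime lengths (say 2 and 3) exist.

```python
# A minimal sketch (not the Yates & Goodman example): the period of state i
# is gcd of all n >= 1 with P^n(i, i) > 0; the class is aperiodic iff it is 1.
import numpy as np
from math import gcd

def period(P, i, max_n=50):
    """Approximate the period of state i by scanning return times up to max_n."""
    g = 0
    Pn = np.eye(len(P))
    for n in range(1, max_n + 1):
        Pn = Pn @ P
        if Pn[i, i] > 0:
            g = gcd(g, n)
    return g

# Hypothetical 3-state class {0, 1, 2} with a self-loop at state 0: the
# self-loop gives a return time of 1, so the whole class is aperiodic.
P = np.array([[0.5, 0.5, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
print(period(P, 0))  # 1 -> aperiodic
```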
"... but that would be like calling number theory the study of strings of digits which terminate" -Tao
Pretty funny joke from
terrytao.files.wordpress.com/2011/02/matrix-book.pdf
well, about as funny as a measure theory joke can be, I guess.
(continued quote)
"At a practical level, the opposite is true: just as number theorists study concepts (e.g. primality) that have the same meaning in every numeral system that models the natural numbers, we shall see that probability theorists study concepts (e.g. independence) that have the same meaning in every measure space that models a family of events or random variables. And indeed, just as the natural numbers can be defined abstractly without reference to any numeral system (e.g. by the Peano axioms), core concepts of probability theory, such as random variables, can also be defined ab- stractly, without explicit mention of a measure space"
In this lecture on Probability:
https://www.youtube.com/watch?v=_FTYrQtrDps&list=PLbMVogVj5nJQqGHrpAloTec_lOKsG-foc&index=1
At 14:00, the lecturer argues that an axiomatic approach to probability theory is needed, because a simple intuitive approach to probability often results in apparent paradoxes.
One example he gives is Bertrand's paradox, which is explained both in the lecture and at the link below:
https://en.wikipedia.org/wiki/Bertrand_paradox_(probability)
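As a quick illustration of why the "paradox" bites, here is a small Monte Carlo sketch of my own (not from the lecture): three reasonable ways of drawing a "random chord" of the unit circle give three different answers, roughly 1/3, 1/2 and 1/4, for the probability that the chord is longer than the side of the inscribed equilateral triangle.

```python
# Monte Carlo sketch of Bertrand's paradox: the probability that a "random"
# chord of a unit circle is longer than sqrt(3) (the side of the inscribed
# equilateral triangle) depends on how "random chord" is defined.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
side = np.sqrt(3)  # side length of the inscribed equilateral triangle

# Method 1: chord from two independent uniform endpoints on the circle (-> 1/3)
theta = rng.uniform(0, 2 * np.pi, (2, N))
chord1 = 2 * np.abs(np.sin((theta[0] - theta[1]) / 2))

# Method 2: chord through a uniform point on a fixed radius, perpendicular to it (-> 1/2)
d = rng.uniform(0, 1, N)
chord2 = 2 * np.sqrt(1 - d**2)

# Method 3: chord whose midpoint is uniform in the disk (-> 1/4)
r = np.sqrt(rng.uniform(0, 1, N))  # radial distance of a uniform point in the disk
chord3 = 2 * np.sqrt(1 - r**2)

for name, c in [("endpoints", chord1), ("radius", chord2), ("midpoint", chord3)]:
    print(name, np.mean(c > side))
```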
Does anyone know of any other similar 'paradoxes' that arise in a pre-axiomatic approach to probability?
> Miner tactic:
>
> 1) mine BTC
> 2) keep BCC hashrate at survival level
> 3) sell mined BTC for BCC
> 4) leave some time to wallets and companies to add BCC
> 5) move hashrate from BTC to BCC, crippling the old chain
> 6) enjoy the x10 gain
This would obviously be a very big "Fuck you!" to Blockstream and their trained puppets.
https://seeing-theory.brown.edu
Pretty good refresher relating to basic probability and stats concepts (especially helpful during interview prep).
Note: There's a pdf of the "book" available on the website so make sure to check that out.
I'm learning about martingales in a class that is very close to probability theory and I can't seem to understand the definition of martingales and the intuitive explanation doesn't help much.
I looked around, but it seems everyone is using the same definition (for discrete time martingales):
A martingale is a sequence of random variables X_0, X_1, ... that satisfies for every n:
E(|X_n|) < ∞
E( X_(n+1) | X_0, ... , X_n ) = X_n
What's confusing me is how the result of an expectation of a random variable is another random variable. It doesn't make sense, because normally the result of an expectation is a real number, so how can a random variable (which is a function) be the result?
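A standard concrete example (my own choice, not tied to the class) may make the definition click: take the simple symmetric random walk X_n = ξ_1 + ... + ξ_n, where the steps ξ_i are i.i.d. with P(ξ_i = 1) = P(ξ_i = -1) = 1/2. Then

E( X_(n+1) | X_0, ... , X_n ) = E( X_n + ξ_(n+1) | X_0, ... , X_n ) = X_n + E(ξ_(n+1)) = X_n + 0 = X_n.

The conditional expectation is computed as a function of the conditioning variables X_0, ..., X_n, so the object it produces is itself a random variable (here, literally X_n); it only collapses to a single real number when you take an unconditional expectation.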
I want to say it's called an Odin Brain? It's the idea that somewhere in space, at some point, the specific conditions will exist with the perfect combination of particles for a human being, consciousness and all, to blip into existence. Somebody said that Ego from Guardians of the Galaxy 2 is one of these.
I will be a junior next semester as a CS major. I'm not sure which I should choose between these two classes, since I can only take one. I'm not sure what I want to do with my degree. I have experience with mathematical proofs from my analysis and algebra classes, etc., so I don't think mathematical maturity will be a problem. Thank you.
Edit: The theory of computation course is a brief intro to computability theory, computational complexity, and some randomized and approximation algorithms. The probability course is just a standard course without any measure theory.
I remember reading this in the form of a 'long form' article with lots of pictures and illustrations and I believe it was written with a sort of sense of humor (not satirizing the theory or anything, just sort of making the article more fun to read), kind of how stuff comes across on the website the Oatmeal.
I really want to find this again, even if it's not the exact same article or essay I read before, because after reading it, it was impossible not to feel utterly, microscopically tiny in the big picture of everything that exists out there; it was a really weird, trippy feeling.
Can we at this point just assume Whitehorse is a master manipulator hungry for money and his soldiers are simply brainwashed? Should we now stop seeking any "higher" purpose to his particle accelerator? Is he simply a greedy man who invented something in the shape of religion to achieve his goal?
Now let me just say I know there is zero way to "predict" the lottery, but I wonder if the law of averages and probability theory can be applied. There are many sites out there that give/archive data about certain lotteries, and I found them interesting.
For example, in Canada's Lotto 6/49, here are some interesting things I've noticed based on the sites I use. Here's a screencap, as I do not want to post the link outright and have it caught in a spam filter. So, what I've noticed:
I don't know, maybe there's nothing to it, but don't you think it would help if, for the next few draws, you picked the number 28 (or 39), plus a number from the previous draw, plus only one number from 0-24?
I know of the Gambler's Fallacy and the idea of "overdue numbers" not being a true reflection of how things operate, but statistically over the course of thousands of draws things tend to even out more or less. For example, the average number of "skips" per number is between 5-7, among other things.
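That "about 5-7 skips" figure is close to what a memoryless uniform draw predicts: a given number appears in each draw with probability 6/49, so the expected gap between appearances is 49/6 - 1, about 7.2. A quick simulation of my own (not tied to any site's archive):

```python
# Sanity check of the "skips per number" observation, assuming a plain
# uniform 6-of-49 draw with no memory between draws.
import random

random.seed(0)
draws = [set(random.sample(range(1, 50), 6)) for _ in range(100_000)]

gaps, last_seen = [], {}
for t, draw in enumerate(draws):
    for n in draw:
        if n in last_seen:
            gaps.append(t - last_seen[n] - 1)  # draws skipped since last appearance
        last_seen[n] = t

print(sum(gaps) / len(gaps))  # about 7.2, i.e. 49/6 - 1
```

So the skip statistics "evening out" over thousands of draws is entirely consistent with every single draw being independent of the ones before it.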
I have attempted the following proof. I am aware of the other proof that makes use of P(\Omega) = P(\Omega) + P(\emptyset), hence P(\emptyset) = 0. But I am looking for another proof, and this is what I have come up with. Is it valid?
For those who have experiences with either of these classes, what opinions do you have?
I am currently taking Probability Theory I, which covers topics like random variables, continuous and discrete distributions, moment generating functions, joint probability distributions, etc. I like the class and am doing relatively well. I know I have to take Mathematical Statistics the next spring, and am currently planning courses for the fall.
Would you think it might be helpful for me take Probability Theory II to prepare for Math Stat? Prob. Theory II covers the following: "Markov chains, exponential distribution, Poisson process, continuous time Markov chains, Brownian motion, stationary processes".
I'm trying to decide between this and an unrelated class that I'm interested in, but I don't want to take the unrelated class if Prob. Theory II would help prepare me for the topics presented in Math. Stat. If it helps, Math. Stat covers the following: "Random sampling, sampling distributions, Student's t, chi-squared and F distributions, unbiasedness, minimum variance unbiased estimators, confidence intervals, tests of hypothesis, Neyman-Pearson Lemma, and uniformly most powerful tests". I'm a Mathematical Economics major who's more interested in getting into finance, possibly the quantitative side as well. Thanks for any advice you have.
Hello!
Let {X_n}, n=1,2,3,... be a sequence of i.i.d. random variables, where E[X_i] = 0 and Var[X_i] = \sigma^2 for every i.
I want to compute the following:
`[; E[(\sum_{i=0}^n iX_i)^2] ;]`
Progress: My understanding of a square of a summation is that:
`[; (\sum_iX_i)^2 = \sum_i\sum_jX_iX_j ;]`
So basically I would get:
`[; E[(\sum_{i=0}^n iX_i)^2] = E[\sum_{i=0}^n \sum_{j=0}^n ijX_iX_j] = \sum_{i=0}^n \sum_{j=0}^n ijE[X_i]E[X_j] = 0;]`
Is this the correct way to approach this, or am I missing something?
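One step that may be worth double-checking: independence gives E[X_iX_j] = E[X_i]E[X_j] only when i ≠ j; on the diagonal i = j the term is E[X_i^2] = Var(X_i) = \sigma^2, which does not vanish. Splitting the double sum accordingly (a sketch, under the stated i.i.d., zero-mean assumptions):

`[; E[(\sum_{i=0}^n iX_i)^2] = \sum_{i \neq j} ij E[X_i]E[X_j] + \sum_{i=0}^n i^2 E[X_i^2] = 0 + \sigma^2 \sum_{i=0}^n i^2 = \sigma^2 \frac{n(n+1)(2n+1)}{6} ;]`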
In the context of measure theory, and probability spaces in particular, I'm having trouble "seeing" sets whose elements are themselves subsets of a larger set. An example is a generic probability space (Ω, F, P). It trips me up in proofs and such. I have far less trouble dealing with a set that just has "simple" elements. But when speaking about a set of subsets, and measures on subsets of said set, I just go "Whoa, whoa, whoa. Hold on! Time out!"
Any tips for how I can get past this seemingly meta world of set theory and finally begin diving into the intricacies of measure theory?
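One thing that sometimes helps is to write out a tiny finite example where every object is explicit. The snippet below is a toy example of my own (the names are just for illustration): F is a set whose elements are subsets of Ω, and P assigns a number to each of those elements.

```python
# A tiny concrete probability space, to make "a set whose elements are
# subsets" tangible (toy example, not from any particular textbook).
from fractions import Fraction

omega = frozenset({1, 2, 3, 4})                                  # sample space
F = {frozenset(), frozenset({1, 2}), frozenset({3, 4}), omega}   # a sigma-algebra
P = {A: Fraction(len(A), len(omega)) for A in F}                 # uniform measure on F

# Events are *elements* of F, and each event is itself a *subset* of omega.
for A in sorted(F, key=len):
    print(set(A) or "{}", "->", P[A])
```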