A list of puns related to "Laplace's demon"
Has anyone heard of Laplace's Demon before?
https://en.m.wikipedia.org/wiki/Laplace%27s_demon
What does it mean? Does it mean: if we could know everything in the past, we could predict the future?
Thanks
Why do so many animes mention the Demon of Laplace? Maybe it's not as many as I think, but the last four animes I have watched all mention the demon of Laplace somewhere, and in three of them it wasn't just mentioned once but had something to do with the main plot.
Why did determinism in 1814 need an ultimate observer, when they were still working in a Newtonian framework?
A case for free will against the fatalistic view of the universe goes like this: if fatalism is correct, then a snapshot of the universe at any given moment would tell you (assuming you had the capability to compute the result from the location and state of every atom in the universe) what action you are going to take in any circumstance. Say you knew the phone was going to ring, thanks to a fatalistic calculation; then you could use physics to determine whether you will answer the phone or not. And if you knew the answer to this, you could just do the opposite of whatever the calculation predicted. Thus, free will must be real, because you can always will the opposite of the action it was calculated you would take.
This is, however, not an argument for free will, but it is still a mindfuck. You don't need free will for the same situation to arise. For instance, if you calculated what a simple decision-making robot would do and told the robot, and the robot always does the opposite of what you tell it (as it is programmed to do), the same situation would hold, even though the robot obviously doesn't have free will.
Still, this is a mind-twisting concept: if you could calculate exactly what the robot would do, and it always does the opposite, then you clearly can't predict what it will do. So this thought experiment is a terrible case for free will, but a decent case against fatalism. Yet fatalism, to me, still has to hold, so what is the answer to this conundrum? Is it just a paradox (like the sentence "this sentence is false"), and simply an illogical thing to calculate, or what is the solution?
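To make the robot version concrete, here is a minimal sketch (hypothetical names, just to illustrate the logic): whatever prediction the "demon" announces to the contrarian robot, the robot falsifies it.

```python
# A contrarian robot: told a prediction of its own action, it always
# does the opposite. No announced prediction can be correct.

def robot_answers_phone(predicted_to_answer: bool) -> bool:
    """The robot answers the phone iff it was predicted not to."""
    return not predicted_to_answer

for prediction in (True, False):
    actual = robot_answers_phone(prediction)
    print(f"demon predicts {prediction}, robot does {actual}")
    assert actual != prediction  # every announced prediction fails
```

Note the contradiction only arises when the prediction is fed back into the system being predicted; a prediction kept secret stays consistent, which is one standard way out of the conundrum, much like the liar sentence above.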
I'm confused about the relationship between determinism and quantum mechanics. Over 200 years ago Laplace claimed that if an entity (he called it the demon) knew the position and momentum of all particles created during Big Bang nucleosynthesis, this entity could, with enough computing power, predict the whole future of the cosmos. Of course Heisenberg correctly added that it is not possible to determine both properties (position and momentum) accurately at the same time (and besides Heisenberg's approach, there are several other arguments against Laplace's determinism). But why do we then argue, from Heisenberg's point of view, that the universe is not absolutely deterministic at all? Just because we (and our technology) are not able to measure both properties at the same time, must we conclude that a particle cannot HAVE a certain position AND a certain momentum at one and the same time? Why do we link measurement (not necessarily by a human observer, of course) to the fundamental reality of particles? Especially when arguing that the cosmos itself could be considered the calculating demon? What hint am I missing? Can anyone help? Thank you!
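For reference, the Heisenberg relation in question bounds the product of the standard deviations of position and momentum for any quantum state:

```latex
\sigma_x \, \sigma_p \ \ge\ \frac{\hbar}{2}
```

On the standard reading this is a property of the state itself, not of our instruments, which is why it is usually taken to block Laplace's demon in principle rather than merely in practice; interpretations that keep definite underlying values (e.g. de Broglie-Bohm) do exist, but they pay for it with nonlocality.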
We have two upcoming talks in April for our ongoing online seminar series about Bayesian machine learning at scale. The intended audience includes machine learning practitioners and statisticians from academia and industry.
Upcoming talks, with Zoom registration links:
This coming Wednesday, 7 April
Monte Carlo integration with repulsive point processes - Rémi Bardenet
Monte Carlo integration is the workhorse of Bayesian inference, but the mean square error of Monte Carlo estimators decreases slowly, typically as 1/N, where N is the number of integrand evaluations. This becomes a bottleneck in Bayesian applications where evaluating the integrand can take tens of seconds, like in the life sciences, where evaluating the likelihood often requires solving a large system of differential equations. I will present two approaches to faster Monte Carlo rates using interacting particle systems. First, I will show how results from random matrix theory lead to a stochastic version of Gaussian quadrature in any dimension d, with mean square error decreasing as 1/N^{1+1/d}. This quadrature is based on determinantal point processes, which can be argued to be the kernel machine of point processes. Second, I will show how to further take this error rate down assuming the integrand is smooth. In particular, I will give a tight error bound when the integrand belongs to any arbitrary reproducing kernel Hilbert space, using a mixture of determinantal point processes tailored to that space. This mixture is reminiscent of volume sampling, a randomized experimental design used in linear regression.
Joint work with Ayoub Belhadji, Pierre Chainais, and Adrien Hardy
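As a baseline for the rates quoted in the abstract, here is a minimal i.i.d. Monte Carlo sketch (my illustration, not the talk's DPP quadrature) showing the standard 1/N decay of the mean square error:

```python
import numpy as np

# Vanilla Monte Carlo: estimate the integral of f over [0, 1] with the
# sample mean of f at uniform draws. MSE = Var(f)/N, i.e. the 1/N rate
# that DPP-based quadrature improves to 1/N^(1+1/d).

rng = np.random.default_rng(0)

def f(x):
    return np.sin(2 * np.pi * x) ** 2  # true integral on [0, 1] is 0.5

for n in (100, 1_000, 10_000):
    estimates = [f(rng.random(n)).mean() for _ in range(500)]
    mse = np.mean((np.array(estimates) - 0.5) ** 2)
    print(f"N={n:6d}  MSE={mse:.2e}")  # drops ~10x for each 10x in N
```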
21 April
I discuss a structured way for efficient inference in probabilistic graphical models whose building blocks are Markovian stochastic processes. The starting point is a generative model, a forward description of the probabilistic dynamics. The information provided by observations can be backpropagated through the model to transform the generative (forward) model into a conditional model guided by the data. It approximates the actual conditional model, with a known likelihood ratio between the two. The backward filter and the forward change of measure…
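The abstract is cut off above, but the core "backpropagate the observation, then guide the forward model" idea can be sketched on a toy finite-state Markov chain (my illustration via a Doob h-transform, assumed to match the spirit of the talk, not its actual construction):

```python
import numpy as np

# Condition a Markov chain on hitting state x_T at time T:
# 1) backward filter h[t, x] = P(X_T = x_T | X_t = x),
# 2) forward guiding: reweight each transition by the filter.

rng = np.random.default_rng(5)
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])  # forward transition matrix
T, x_T = 10, 2                   # condition on ending in state 2

h = np.zeros((T + 1, 3))         # backward information filter
h[T, x_T] = 1.0
for t in range(T - 1, -1, -1):
    h[t] = P @ h[t + 1]

x, path = 0, [0]                 # forward pass, guided by h
for t in range(T):
    probs = P[x] * h[t + 1]
    x = rng.choice(3, p=probs / probs.sum())
    path.append(x)

print(path)                      # a draw conditioned to end in state 2
assert path[-1] == x_T
```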
In a nutshell, Laplace's Demon is the idea that if someone (the demon) could know the precise location and movement of every atom, with their past, present, and future values thereby known, they could know everything there is to know.
#THE POWER
For 5-30 minutes (depending on what you're using the power for), you can know the precise location and movement of every atom, with their past, present, and possible future values known as well. You can perfectly retrace the past and, by piecing together the knowledge given to you, almost perfectly predict the future. And since you're aware of every atom, you can rarely ever get hit, for obvious reasons.
#LIMITATIONS
Your knowledge of the atoms can't transcend your planet, and even knowing all the atoms on Earth can only be done with extreme focus and discipline (your knowledge of the rest of the universe's atoms will fade fairly quickly, since you have a human mind).
The range of the power is 100 yards, so you can't know about *X* until it enters your radius. You can dial the radius at will.
Your power has a blind spot: the futures you predict aren't always 100% accurate. They're just what's MOST likely to happen.
#MASTERY
If the user mastered this power, they could use it in even more precise ways, like focusing on the atoms of only one person. If you did this, the range and time limit of your power would be greater, since you're only focusing on a single person.
We have two upcoming talks in February for our ongoing online seminar series about Bayesian machine learning at scale. The intended audience includes machine learning practitioners and statisticians from academia and industry.
Upcoming talks, with Zoom registration links:
This coming Wednesday, 10 Feb
Backfitting for large scale crossed random effects regressions - Art Owen
Large scale genomic and electronic commerce data sets often have a crossed random effects structure, arising from genotypes x environments or customers x products. Naive methods of handling such data will produce inferences that do not generalize. Regression models that properly account for crossed random effects can be very expensive to compute. The cost of both generalized least squares and Gibbs sampling can easily grow as N^(3/2) (or worse) for N observations. Papaspiliopoulos, Roberts and Zanella (2020) present a collapsed Gibbs sampler that costs O(N), but under an extremely stringent sampling model. We propose a backfitting algorithm to compute a generalized least squares estimate and prove that it costs O(N) under greatly relaxed though still strict sampling assumptions. Empirically, the backfitting algorithm costs O(N) under further relaxed assumptions. We illustrate the new algorithm on a ratings data set from Stitch Fix.
Joint work with Swarnadip Ghosh and Trevor Hastie of Stanford University.
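A toy sketch of what backfitting means here (my illustration of the general idea, not the authors' estimator): alternate O(N) sweeps that re-estimate row and column effects from residuals.

```python
import numpy as np

# Toy backfitting for y = mu + a[row] + b[col] + noise on a sparse set
# of (row, col) cells. Each sweep is O(N) in the number of observations.
# Illustration only: plain means, no shrinkage of the random effects.

rng = np.random.default_rng(1)
n_rows, n_cols, N = 200, 300, 5_000
rows, cols = rng.integers(0, n_rows, N), rng.integers(0, n_cols, N)
a_true, b_true = rng.normal(0, 1, n_rows), rng.normal(0, 1, n_cols)
y = 2.0 + a_true[rows] + b_true[cols] + rng.normal(0, 0.5, N)

mu, a, b = y.mean(), np.zeros(n_rows), np.zeros(n_cols)
row_n = np.maximum(np.bincount(rows, minlength=n_rows), 1)
col_n = np.maximum(np.bincount(cols, minlength=n_cols), 1)
for _ in range(50):  # alternate until convergence (assumed here)
    a = np.bincount(rows, y - mu - b[cols], n_rows) / row_n
    b = np.bincount(cols, y - mu - a[rows], n_cols) / col_n
    mu = (y - a[rows] - b[cols]).mean()

print("corr(a, a_true) =", round(np.corrcoef(a, a_true)[0, 1], 3))
```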
24 Feb
Approximate Bayesian computation with surrogate posteriors - Florence Forbes
A key ingredient in approximate Bayesian computation (ABC) procedures is the choice of a discrepancy that describes how different the simulated and observed data are, often based on a set of summary statistics when the data cannot be compared directly. Unless discrepancy and summaries are available from experts or prior knowledge, which seldom occurs, they have to be chosen, and this can affect the approximations. Their choice is an active research topic, which to date has mainly considered data discrepancies requiring samples of observations or distances between summary statistics. In this work, we introduce a preliminary learning step in which surrogate posteriors are built from finite Gaussian mixtures using an inverse regression approach. These surrogate posteriors are then used in place of summary statistics and compared using metrics between distributions…
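For readers new to ABC, here is the baseline the talk builds on: plain rejection ABC with a summary statistic (a generic textbook sketch, not the surrogate-posterior method of the abstract):

```python
import numpy as np

# Rejection ABC: infer the mean theta of a Gaussian. Keep a simulated
# theta whenever its synthetic data lie within eps of the observed
# data under a summary-statistic discrepancy.

rng = np.random.default_rng(2)
y_obs = rng.normal(3.0, 1.0, 100)        # observed data, true theta = 3
s_obs = y_obs.mean()                     # summary statistic

accepted = []
for _ in range(100_000):
    theta = rng.uniform(-10, 10)         # draw from the prior
    y_sim = rng.normal(theta, 1.0, 100)  # simulate data given theta
    if abs(y_sim.mean() - s_obs) < 0.05: # discrepancy below eps?
        accepted.append(theta)

print(f"{len(accepted)} accepted, posterior mean ~ {np.mean(accepted):.2f}")
```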
We have three upcoming talks for our ongoing online seminar series about Bayesian machine learning at scale. The intended audience includes machine learning practitioners and statisticians from academia and industry.
Upcoming talks, with Zoom registration links:
This coming Wednesday, 18 Nov
Interpreting Bayesian Deep Neural Networks Through Variable Importance - Sarah Filippi
While the success of deep neural networks is well-established across a variety of domains, our ability to explain and interpret these methods is limited. Unlike previously proposed local methods which try to explain particular classification decisions, we focus on global interpretability and ask a generally applicable, yet understudied, question: given a trained model, which input features are the most important? In the context of neural networks, a feature is rarely important on its own, so our strategy is specifically designed to leverage partial covariance structures and incorporate variable interactions into our proposed feature ranking. Here, we extend the "RelATive cEntrality" (RATE) measure of Crawford et al. (2018) to the Bayesian deep learning setting. Given a trained network, RATE applies an information theoretic criterion to the posterior distribution of effect sizes to assess feature significance. Importantly, unlike competing approaches, our method does not require tuning parameters, which can be costly and difficult to select.
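To make the "information theoretic criterion on the posterior distribution of effect sizes" concrete, here is a simplified RATE-style score for a Gaussian posterior (a rough sketch of the idea, not Crawford et al.'s estimator): score variable j by the KL divergence induced on the other effect sizes when beta_j is fixed to zero, then normalize.

```python
import numpy as np

# Simplified RATE-style importance: for a posterior beta ~ N(mu, Sigma),
# compare the posterior of the other effects with and without the
# constraint beta_j = 0, via the closed-form Gaussian KL divergence.

def gauss_kl(m0, S0, m1, S1):
    """KL( N(m0, S0) || N(m1, S1) )."""
    k, S1_inv = len(m0), np.linalg.inv(S1)
    d = m1 - m0
    return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def rate_scores(mu, Sigma):
    p, kld = len(mu), np.empty(len(mu))
    for j in range(p):
        rest = [i for i in range(p) if i != j]
        m_marg, S_marg = mu[rest], Sigma[np.ix_(rest, rest)]
        gain = Sigma[rest, j] / Sigma[j, j]      # condition on beta_j = 0
        m_cond = m_marg - gain * mu[j]
        S_cond = S_marg - np.outer(gain, Sigma[j, rest])
        kld[j] = gauss_kl(m_cond, S_cond, m_marg, S_marg)
    return kld / kld.sum()                       # normalize to sum to 1

mu = np.array([2.0, 0.1, -1.5])
Sigma = np.array([[1.0, 0.3, 0.1], [0.3, 1.0, 0.2], [0.1, 0.2, 1.0]])
print(rate_scores(mu, Sigma))  # variables 0 and 2 should dominate
```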
2 Dec
SMC (Sequential Monte Carlo) samplers present several clear advantages over MCMC (Markov chain Monte Carlo). In particular, they require little tuning (or more precisely, it is easy to automate their tuning to a given problem); they are easy to parallelize; and they allow for estimating the marginal likelihood of the target distribution. In this talk, I will discuss how SMC may be used in various problems in machine learning and computational statistics, and why they remain slightly overlooked in these areas. One possible reason (among several others) is that the following difficulty with SMC samplers may have been overlooked in the literature: that, to obtain optimal performance, one may need to apply a large number of MCMC steps at each iteration…
These five demons spawn naked on a featureless plain, arranged on the outer vertices of a pentagram whose edges are 100m in length. Being demons, they are all bloodlusted, desiring nothing more than to emerge victorious over their enemies in vicious, deadly combat. Physiologically, they are 'peak human', though each possesses a special ability:
Descartes' demon can cause sensory illusions on a single foe at a time. If desired, these are indistinguishable from non-illusory observation, and can vary to arbitrary degrees of nuance or precision, from a subtle shift in position to total sensory blackout. Targets can be switched instantly, with no cooldown, but only one target can be affected at any given time. Illusion complexity needs no additional thought on Descartes' demon's end, and each illusion requires only fleeting thought to activate -- while maintaining an illusion, Descartes' demon can still act, though with very slightly impaired attention. These illusions are limited to sight and sound, though, so no inducing infinite nociception, thermoception, or other non-audiovisual sense experience in targets.
Laplace's demon knows the exact position of each of the other demons, down to their smallest units, and can in this manner foresee their future movements in exacting detail. However, entering and exiting this state of local omniscience requires a quarter second of concentration, during which Laplace's demon is vulnerable to outside action. In this state, Laplace's demon cannot be fooled by Descartes' demon's illusions. However, the future gleaned in Laplace's demon's visions is conditional on their remaining indefinitely in their meditative trance -- by exiting the state, they chart a new future history, rendering void any precognitive knowledge they might affect by their own actions (that doesn't involve unceremonious defeat). Laplace's demon can also predict those demons able to violate physical law.
Nietzsche's demon is able to eternally return from any non-lethal injury, rewriting any damage it has sustained with perfect health. This ability activates once every second. However, damage that would otherwise be fatal (if unattended for, say, five minutes) is not healed, so they can still be killed through the application of sufficient force.
Darwin's demon can reproduce itself once per minute through instantaneous binary fission. The matter for this reproduction is conjured ex nihilo, and clones of Darwin's demon are in turn…
>#Introduction
Laplace no Ma is a JRPG with Horror elements from the 80s/90s era. Two missing boys were found dead near the entrance of Weathertop Hall, a mansion located on the outskirts of Newcam, and word has reached you. Upon arriving at Newcam, things have gotten worse: a girl is now missing, last seen near the mansion. You feel there is more than meets the eye, with the clues pointing to the mansion. Before you leave, locals talk about Weathertop Hall supposedly being haunted.
>#Presentation
The intro sets a lofty goal for the game's tone and story. I like the art direction.
The plot reveals some notes paralleling Lovecraft's The Dream in the Witch-House by mentioning Laplace's Demon; both take math in different directions. Laplace's Demon is named after Pierre-Simon de Laplace (1814), as follows: "if someone (the demon) knows the precise location and momentum of every atom in the universe, their past and future values for any given time are entailed; they can be calculated from the laws of classical mechanics" [Source: A Philosophical Essay on Probabilities]. Nonetheless, the writing is a mixed bag, leaning toward good. The enemies' overall design is Gothic, from Vampires to Ghosts, and some are inspired by Lovecraft's mythos, like the ghouls from Pickman's Model and The Dream-Quest of Unknown Kadath. As you traverse further into Weathertop Hall, it becomes more apparent that the focus is indeed Gothic.
The translation is good, though there are some odd arrangements with the party names; I would guess because of overlapping previously. Two years ago this was a requested fix. The menus themselves are basic, yet I do like the green and gold combination with the white lettering. Overall, Aeon Genesis did an admirable job with the translation.
The music suffers from mistaken identity and repetition. In battle, the music shifts into a chipper, happier, upbeat tone that feels out of place in a game marketed as Horror, as opposed to the sombre mood of exploring the Hall. It had me laughing at how foolishly happy it sounds.
>#Gameplay
So how does the gameplay fare compared to the presentation? Well, it's better.
After starting, you have the option of picking between male or female. It doesn't sound like much now, but with the first iteration of Laplace (1987) I would imagine it was a big deal in its heyday. Next, picking a class…
We have recently launched an ongoing online seminar series about Bayesian machine learning at scale. The intended audience includes machine learning practitioners and statisticians from academia and industry.
Registration is now open for Jake Hofman's 17 June talk: "How visualizing inferential uncertainty can mislead readers about treatment effects in scientific results". Jake is a Senior Principal Researcher at Microsoft Research, New York. We very much look forward to his insights on visualizing uncertainty. The talk is at 15.00 UTC this Wednesday, June 17; to see it in your local time zone please go to the registration page. Please register at: https://ailab.criteo.com/laplaces-demon-bayesian-machine-learning-at-scale/
Secondly, Christian Robert's talk on approximate Bayesian computation is now online. Christian not only presents state-of-the-art results showing ABC using Gibbs-like steps, but also takes time to give the basics of ABC methods and takes many questions. https://www.youtube.com/watch?v=Aq4juvSsz9Y
Finally, we have a new website giving details of upcoming talks, including A/Prof Aki Vehtari's 24 June talk on "Use of reference models in variable selection". https://ailab.criteo.com/laplaces-demon-bayesian-machine-learning-at-scale/
Also upcoming:
So, as a pseudo-philosopher who often quotes philosophy stuff online to seem smarter in internet flame wars, I'm really disappointed in myself for not seeing this sooner.
For those that don't know, Laplace's Demon was a thought experiment put forward by Pierre-Simon Laplace, and it kinda falls in the same category as the more famous Schrodinger's Cat. Basically, the idea is that certain things are absolutely going to happen because of the way things react to one another in the physical world. If a ball rolls off a table, the ball will have a certain speed based on its weight, the material it's made of, the friction of the ball and table, the gravity of the earth at that location, the distance the ball has to accelerate, all that stuff. When you light something on fire, the chemical reaction of that object and heat will determine the height of the flames, the color, the shape the flame makes, etc.
Laplace put forward the idea that the future could basically be predicted if you knew exactly how everything would interact with each other. This would mean knowing every single atom in the universe, in every object in the universe, their exact distances from other atoms, and the exact ways they would interact with each other in a physical and chemical fashion. Obviously this kind of calculation is impossible, and would never be possible for us no matter how many thousands of years go by and no matter how sophisticated our computers get. For someone - or something - to be able to perform this feat, they would have to be a God... or a Demon. Hence, Laplace's Demon.
This brings me back to the concept of Naming in The Kingkiller Chronicles. If you've read the books, I don't think I need to explain the similarities in concept; I just thought it was a neat thing to point out. Hopefully for some this gave a better insight into the colossal task it is to learn the Name of even one tiny thing, as Elodin said in his lecture in the second book:
'He reached into a pocket and pulled out a river stone, smooth and dark. "Describe the precise shape of this. Tell me of the weight and pressure that forged it from sand and sediment. Tell me how the light reflects from it. Tell me how the world pulls at the mass of it, how the wind cups it as it moves through the air. Tell me how the traces of its iron will feel the calling of a loden-stone. All of these things and a hundred thousand more make up the name of this stone." He held it out to us at arm's length. "This single, s…
... keep reading on reddit β‘Pierre-Simon Laplace in 1814 published the first discussion of determinism. Laplace uses a 'demon' as his quantifying component where Devs uses the computer, but the scale and implication of the two seem directly comparable.
Apologies if this has been posted or discussed already; I found it interesting having seen Devs before learning of 'Laplace's Demon'.
I saw a post at the beginning of the show where somebody suggested the theory of Laplace's Demon. In the finale where Forest states:
"The state of every particle is related to the states of the particles around it.
Understand the state of one.
Understand the state of the other...
...understand the state of everything.
Big data. The data of all things."
That is basically a rough definition of Laplace's demon. However, this theory was based on the reversibility of classical mechanics, whereas there is actually an open debate that sets modern quantum and thermodynamic irreversibility against it.
Mr Garland has definitely done his homework, including all of this in his work, and it's such a pleasure to dive into it.
I caught it on Amazon Prime (EU) and loved it. For what seems to be a low-budget production, they were able to pull off a suspenseful and tense mystery-thriller in stylish black & white.
Here is the brief plot synopsis from IMDb:
> A team of researchers have developed a system to calculate seemingly random events. A mysterious professor invites them to his remote house on a rock in the middle of the ocean. All they have to do is survive the night.
Definitely deserves more recognition.
Edit: I recommend not watching more than the first 20-25 seconds of the trailers/teasers for a better surprise effect!
We have two final upcoming talks in 2020 for our ongoing online seminar series about Bayesian machine learning at scale. The intended audience includes machine learning practitioners and statisticians from academia and industry.
Upcoming talks, with Zoom registration links:
This coming Wednesday, 2 Dec
SMC (Sequential Monte Carlo) samplers present several clear advantages over MCMC (Markov chain Monte Carlo). In particular, they require little tuning (or more precisely, it is easy to automate their tuning to a given problem); they are easy to parallelize; and they allow for estimating the marginal likelihood of the target distribution. In this talk, I will discuss how SMC may be used in various problems in machine learning and computational statistics, and why they remain slightly overlooked in these areas. One possible reason (among several others) is that the following difficulty with SMC samplers may have been overlooked in the literature: that, to obtain optimal performance, one may need to apply a large number of MCMC steps at each iteration.
I will also present a recent paper (joint work with Hai-Dang Dau) where we develop a new type of SMC sampler, where all the intermediate Markov steps are used as "particles". That makes the resulting algorithm typically more efficient, and more importantly much more robust to user choices, and thus ultimately easier to use.
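For readers who have not met SMC samplers, here is a minimal tempered version (a generic textbook sketch, not the waste-free variant above): reweight, resample, then apply one MCMC step per stage; the abstract's point is that one step per stage is often not enough.

```python
import numpy as np

# Tempered SMC: move particles from a N(0, 3^2) prior to a bimodal
# target through bridges pi_t proportional to prior^(1-t) * target^t,
# collecting a marginal-likelihood estimate along the way.

rng = np.random.default_rng(3)

def log_prior(x):
    return -0.5 * (x / 3.0) ** 2

def log_target(x):
    return np.logaddexp(-0.5 * (x - 2) ** 2, -0.5 * (x + 2) ** 2)

N, temps, log_Z = 2_000, np.linspace(0, 1, 21), 0.0
x = rng.normal(0, 3, N)
for t0, t1 in zip(temps[:-1], temps[1:]):
    logw = (t1 - t0) * (log_target(x) - log_prior(x))   # reweight
    log_Z += np.log(np.mean(np.exp(logw - logw.max()))) + logw.max()
    w = np.exp(logw - logw.max())
    x = x[rng.choice(N, N, p=w / w.sum())]              # resample

    def bridge(z, t=t1):
        return (1 - t) * log_prior(z) + t * log_target(z)

    prop = x + rng.normal(0, 1, N)                      # one MCMC step
    accept = np.log(rng.random(N)) < bridge(prop) - bridge(x)
    x = np.where(accept, prop, x)

print(f"log marginal-likelihood estimate: {log_Z:.3f}")
```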
16 Dec
On MCMC for variationally sparse Gaussian process: A pseudo-marginal approach - Sara Wade
Gaussian processes (GPs) are frequently used in machine learning and statistics to construct powerful models. However, when employing GPs in practice, important considerations must be made regarding the high computational burden, approximation of the posterior, form of the covariance function, and inference of its hyperparameters. To address these issues, [Hensman et al. (2015)](https://papers.nips.cc/paper/5875-mcmc-for-variationally-sparse-gaussian-processes.pdf)…
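The abstract is cut off above, but the pseudo-marginal trick it relies on can be sketched generically (a toy model of my own, not the talk's sparse-GP sampler): plug an unbiased likelihood estimate into the Metropolis-Hastings ratio and carry the current estimate along with the state; the chain still targets the exact posterior.

```python
import numpy as np

# Pseudo-marginal MH for theta = log-variance in y_i ~ N(m, e^theta),
# with latent mean m ~ N(0, 1): an unbiased importance-sampling
# estimate of p(y | theta) stands in for the intractable likelihood
# in the acceptance ratio.

rng = np.random.default_rng(4)
y = rng.normal(1.0, 2.0, 50)                 # true log-variance = log 4

def loglik_hat(theta, n_mc=30):
    """Log of an unbiased estimate of p(y | theta), averaging the
    conditional likelihood over n_mc prior draws of the latent mean."""
    ms = rng.normal(0, 1, n_mc)
    ll = np.array([np.sum(-0.5 * (y - m) ** 2 * np.exp(-theta)
                          - 0.5 * (np.log(2 * np.pi) + theta)) for m in ms])
    return np.logaddexp.reduce(ll) - np.log(n_mc)

theta, ll = 0.0, loglik_hat(0.0)
samples = []
for _ in range(5_000):
    prop = theta + rng.normal(0, 0.3)
    ll_prop = loglik_hat(prop)               # fresh estimate for proposal
    if np.log(rng.random()) < ll_prop - ll:  # flat prior, for simplicity
        theta, ll = prop, ll_prop            # keep estimate with the state
    samples.append(theta)

print("posterior mean of log-variance:", round(np.mean(samples[1000:]), 2))
```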