If we could simulate or numerically solve the Schrödinger equation for a large enough quantum system that includes both the particles under test AND the measurement setup, would that show that wave function collapse comes from the interaction between the measured particles and the measurement setup, and therefore make wave function collapse less mysterious?
Could we do this with a quantum computer?
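This is roughly the program of decoherence theory: treat system plus apparatus as one closed quantum system, evolve it unitarily, and the system's reduced state loses its interference terms, so collapse "looks" explained, although why a single definite outcome occurs is still debated. A minimal two-qubit sketch (the CNOT here is just a stand-in for a measurement-like coupling):

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
alpha = beta = 1 / np.sqrt(2)

# System starts in a superposition; the apparatus qubit starts "ready".
psi = np.kron(alpha * up + beta * down, up)

# A CNOT-style coupling correlates the apparatus with the system,
# exactly as a measurement interaction would -- all purely unitary.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
psi = CNOT @ psi

# Reduced density matrix of the system: trace out the apparatus.
rho_full = np.outer(psi, psi).reshape(2, 2, 2, 2)   # indices (s, a, s', a')
rho_S = rho_full[:, 0, :, 0] + rho_full[:, 1, :, 1]

# The off-diagonal (interference) terms are gone: the system LOOKS like a
# classical 50/50 mixture, even though nothing non-unitary happened.
```

Whether this fully resolves the mystery or merely relocates it is exactly the interpretational question the post is asking about.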
Alice and Bob create an entangled pair of electrons and then go their separate ways.
On Day 1: Bob measures the electron to be Spin Up. He then measures it again at a 45-degree angle to create a random result, and then measures it a third time along the axis of his original measurement, essentially rerolling the quantum dice.
If he gets Spin Down, Bob stops. If he gets Spin Up again, he repeats the process until he gets Spin Down.
Bob leaves the particle undisturbed in the Spin Down state.
On Day 2: Alice measures her particle. She does so on the same axis as Bob's Up and Down measurements.
Will Alice measure Spin Up (correlated with the current state of Bob's Particle) or Spin Down (correlated with the state that Bob originally measured)?
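Under standard QM, Alice's outcome is fixed by Bob's first measurement; his later rerolls act only on his own particle. A toy simulation of the protocol (assuming a singlet, i.e. anti-correlated, pair; angles are Bloch-sphere angles measured from +z):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state_angle, axis_angle):
    """Measure a spin prepared along state_angle on the axis_angle axis.
    P(+1) = cos^2(delta/2); the spin collapses onto the measured axis."""
    p_plus = np.cos((axis_angle - state_angle) / 2.0) ** 2
    if rng.random() < p_plus:
        return +1, axis_angle
    return -1, axis_angle + np.pi

results = []
for _ in range(2000):
    # Day 1: Bob's first z measurement on the singlet is 50/50 ...
    bob_first = rng.choice([+1, -1])
    bob_state = 0.0 if bob_first == +1 else np.pi
    # ... and Alice's particle immediately holds the opposite z state.
    alice_state = np.pi - bob_state
    # Bob "rerolls" via a 45-degree measurement followed by a z measurement
    # until he leaves his particle spin down.
    bob = bob_first
    while bob == +1:
        _, bob_state = measure(bob_state, np.pi / 4)  # 45 degrees off z
        bob, bob_state = measure(bob_state, 0.0)      # back on z
    # Day 2: Alice measures along z.
    alice, _ = measure(alice_state, 0.0)
    results.append((alice, bob_first))

# Alice's result is always the opposite of Bob's ORIGINAL result,
# no matter how many rerolls Bob performed afterwards.
assert all(a == -b for a, b in results)
```

The key assumption baked in is that Bob's first measurement breaks the entanglement; after that his particle and Alice's evolve independently.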
I know this type of question has been done to death, but as I understand it, the main issue with using quantum entanglement as a means of communication is that, without another (non-FTL) form of confirmation, there isn't a way for either party to know when a particle has been measured. The reason I've never seen anyone try to work around the problem this way is probably that it's so obviously flawed nobody bothered (I fully expect this to be a dumb idea), but I'm a curious monkey and I've been told asking questions is a good thing.
So before the thought experiment, I'm going to start with two assumptions, since I don't fully understand the limits on measuring a particle's superposition. First, the moment you stop measuring the particle, the outcome of the next measurement is entirely random (meaning there is no "buffer": the moment it's no longer measured, it could be in either state when remeasured). Second, while a measurement is being held, the state cannot change, and neither can that of its entangled partner (along that spin axis).
The thought experiment is as follows. Jake and Amber have created two pairs of entangled particles. Each pair has one particle labeled "output" and the other "input". Jake and Amber want to see if they can send a single random letter to each other in 8-bit binary. Jake will take the input particle of one pair and the output particle of the other pair and travel to the opposite side of the globe. Before Jake leaves, however, both of them agree to measure the particles under the following rules. One, the measurements of the output particles will be taken when it is noon (12 pm) for Amber, so Jake has to calculate what time it will be for him at Amber's noon, which will be midnight (12 am). Two, once they have the desired state on their input particle, they must hold that measurement until 12:05 pm Amber's time, so that the other has five minutes to repeatedly measure their output particle. Finally, both will be measuring the up/down spin of the particles, with up corresponding to 1 and down to 0.
So Jake takes his particles to the other side of the world, and both he and Amber pick a random letter to convert and begin measuring their input particles until they measure the desired spin. For example, Jake wants Amber's output particle to read spin down (0), so he measures his input particle until it reads spin up (1). When they get t
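For what it's worth, the second assumption above is where schemes like this break down: "holding" a measurement does not pin the partner particle, and whatever Jake does, the statistics Amber sees on her side alone stay 50/50. That is the no-signaling theorem; a toy simulation (singlet pair, both measuring along z) shows Amber's marginal is identical whether or not Jake measured first:

```python
import numpy as np

rng = np.random.default_rng(1)

def amber_outcome(jake_measured_first):
    """One run of the protocol; both parties measure along z."""
    if jake_measured_first:
        jake = rng.choice([+1, -1])   # Jake's own result is 50/50
        return -jake                  # Amber then finds the opposite
    return rng.choice([+1, -1])       # untouched pair: also 50/50

N = 100_000
freq_with = np.mean([amber_outcome(True) == +1 for _ in range(N)])
freq_without = np.mean([amber_outcome(False) == +1 for _ in range(N)])
# Both frequencies hover around 0.5: from her side alone, Amber cannot
# tell whether Jake measured, so no bit is transmitted.
```

The correlations only become visible when the two record sheets are compared over a classical (slower-than-light) channel.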
According to the Wikipedia article on quantum entanglement, when two particles interact with each other, they get entangled, I think.
In the case of the EPR paradox, the state of the two-particle system is: (1/√2)(|up⟩₁ ⊗ |down⟩₂) + (1/√2)(|down⟩₁ ⊗ |up⟩₂), where the subscripts label electrons 1 and 2.
But, does this quantum entanglement also extend to measurement apparatus?
Let us consider an electron. I measure the spin of the electron in z axis. I find that the spin is up.
I think that the explanation for this according to the Copenhagen interpretation is that the electron was in a superposition of being both up and down before measurement. After I do the measurement, the superposition collapses and I find the spin to be up.
However, according to relational quantum mechanics, the explanation is like this:
The electron is considered as the system S. Before measurement, the electron is in the state: alpha (up) + beta (down).
Let us consider the measurement apparatus as the observer O.
Once O measures the spin of the electron, according to O, the state of the electron collapses to up.
Now, let us consider another observer P.
This observer is considering the state of the S-O system.
According to RQM, the state of the S-O system is α(spin of S is up ⊗ pointer variable of O pointing towards up) + β(spin of S is down ⊗ pointer variable of O pointing towards down).
So, according to RQM, the system S has collapsed to up for O but the system S is still in a superposition according to P.
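Formally, the same tensor-product machinery used for two electrons also describes the S-O pair. A von Neumann "premeasurement" is just a unitary that correlates the pointer with the system; a NumPy sketch (using a single qubit as a stand-in for the pointer, with |up⟩ doubling as the "ready" state):

```python
import numpy as np

up = np.array([1.0, 0.0])
down = np.array([0.0, 1.0])
alpha, beta = 0.6, 0.8          # example amplitudes, alpha^2 + beta^2 = 1

# System S before measurement: alpha|up> + beta|down>
S = alpha * up + beta * down
O = up                          # apparatus pointer in its "ready" state

# The measurement interaction is a CNOT-like unitary: it flips the
# pointer if and only if S is |down>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)

joint = CNOT @ np.kron(S, O)
# Result: alpha|up,up> + beta|down,down> -- exactly the S-O superposition
# that observer P assigns in RQM.
expected = alpha * np.kron(up, up) + beta * np.kron(down, down)
assert np.allclose(joint, expected)
```

In this sense there is no formal difference: entanglement with an apparatus is the same tensor-product structure, just with vastly more degrees of freedom on the apparatus side.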
Now, if I measure the state of the electron along the z axis again, I would find that the spin of the electron continues to be up.
However, if I measure the spin of the electron along the x axis, there is a 50% probability that the spin would be up and a 50% probability that it would be down.
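That 50/50 figure follows directly from the Born rule, since the z-up state has equal overlap with both x eigenstates:

```python
import numpy as np

up_z = np.array([1.0, 0.0])                 # |up> along z
up_x = np.array([1.0, 1.0]) / np.sqrt(2)    # x-axis eigenstates
down_x = np.array([1.0, -1.0]) / np.sqrt(2)

p_up = abs(up_x @ up_z) ** 2      # |<+x|up_z>|^2
p_down = abs(down_x @ up_z) ** 2  # |<-x|up_z>|^2
assert np.isclose(p_up, 0.5) and np.isclose(p_down, 0.5)
```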
So, my question is: is there some difference between the quantum entanglement of two electrons, and the proposed entanglement between the electron and the measurement apparatus according to RQM?
Moreover, can this entanglement also extend to the level of human beings who observe and maybe make a record of the value of the spin of the electron?
So the standard explanation which has been presented when people learn about wave-particle duality and the double-slit experiment is that it is simply the 'act of measurement' that causes wave function collapse. The measurement apparatus itself has some sort of effect on the waves being measured that leads to collapse.
However, the delayed-choice quantum eraser experiment seems to imply that the act of measurement is not what causes collapse at all; it is the presence of non-scrambled, decipherable which-way 'information' that seems to cause the wave function to collapse. This is because the which-path information can be scrambled after measurement takes place, and after the corresponding entangled photon(s) hit D0 (the interference screen), we still get an interference pattern at D0. Edit: well, only after we sort out the noise on the screen using the info from the detectors does the interference pattern become apparent. If we remove the scrambling of information, the interference pattern at D0 disappears. I read an article that concluded: 'Somehow the universe found a way to ensure that actual information can't travel faster than light and thus causality is preserved. What can be said though is that on a quantum level, particles can affect wave functions in the past without breaking causality'. Is this correct?
Am I correct here, or am I a complete idiot? Because a lot of people keep saying that this experiment is misinterpreted and doesn't show anything special at all. So OK, the misinterpretation is that there is 'retrocausality', but is the correct takeaway from this experiment that it is the decipherable which-path information itself that causes 'collapse'?
If this is correct, have physicists been speculating as to why this is the case? Why would information rather than measurement cause collapse? What is special about such 'information'? And if my interpretation is completely wrong, does anyone have a somewhat clear, straightforward explanation of this experiment? Because I've come across opinion pieces stating that this experiment tells us nothing new at all and that bad explanatory wording has caused the misunderstanding and controversy, while others state the opposite.
Edit 2: this video reinforces the point further by stating that the experiment shows the wave property of the photon can't be collapsed by the physical properties of the detectors: https://www.youtube.com/watch?v=u9bXolOFAB8
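One way to see why nothing retrocausal is needed: in the eraser setup the raw D0 pattern never shows fringes. The interference only appears in the coincidence-sorted subensembles, and those sub-patterns are complementary, so they wash out when combined. A toy model of the conditional distributions (arbitrary units; the sinusoidal fringe shape is an assumption of the sketch, not taken from the actual experiment's geometry):

```python
import numpy as np

x = np.linspace(-4 * np.pi, 4 * np.pi, 1000)  # position along the D0 screen

# Conditioned on the two "erasing" idler detectors, the signal photon shows
# complementary fringe patterns:
p_d1 = 0.5 * (1 + np.cos(x))   # fringes, given the idler hit D1
p_d2 = 0.5 * (1 - np.cos(x))   # anti-fringes, given the idler hit D2

# Unconditioned pattern at D0: the subensembles sum to a flat distribution,
# so no interference is visible without the coincidence data.
total = 0.5 * p_d1 + 0.5 * p_d2
assert np.allclose(total, 0.5)
```

Since the fringes only exist relative to the idler outcomes, nothing on the D0 side ever changes depending on the later choice, which is why causality is safe.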
What happens if the measurement outcome is somewhere in the continuous part of the spectrum? What would the state after the measurement be?
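For the continuous-spectrum question: a sharp eigenvalue of, say, position corresponds to a non-normalizable delta function, so the usual textbook answer is that a realistic measurement has finite resolution and projects onto an interval (more generally, a POVM), after which the state is the windowed, renormalized wavefunction. A discretized sketch:

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
psi = np.exp(-x ** 2 / 4)                        # a Gaussian wave packet
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)    # normalize on the grid

# A realistic detector asks "is the particle in [a, b]?" rather than
# "is it exactly at x0?".
a, b = 0.0, 1.0
window = (x >= a) & (x <= b)

prob = np.sum(np.abs(psi[window]) ** 2) * dx     # Born probability of the interval
psi_after = np.where(window, psi, 0.0)           # project onto the interval...
psi_after /= np.sqrt(prob)                       # ...and renormalize
```

The sharp-eigenvalue limit is recovered (formally) as the interval shrinks, at the price of a state that is no longer normalizable.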
I need help clarifying the controversy over the double-slit experiment. The explanation everyone keeps giving is that the photon (or particle, or whatever) behaves like a wave when the path is not known, but when a detector is placed and the path of the particle is known, it behaves like a particle. And the Copenhagen interpretation of that phenomenon is that the wave function collapses when it is measured or observed. Correct?
The problem I am having is not with the Copenhagen interpretation but with the initial characterization of the phenomenon. I don't understand why they say it behaves as a wave, because an individual photon never produces an interference pattern on its own. Yes, many photons sent one after another create an interference pattern, but every photon always shows up on the detector as a point particle. Therefore it seems more accurate to say that the probability forms a wave when the path is not known, and when a detector is placed the probability forms a particle-like pattern. In other words, the fundamental nature of a photon doesn't change back and forth between a wave and a point particle; its probability distribution changes between a wavelike distribution and a point-particle-like distribution. Correct, or have I made an error?
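This reading matches how the statistics actually look: each detection is a single point, and the fringes only emerge in the histogram of many detections. A quick sampling sketch (the fringe profile here is a made-up toy, not a real two-slit calculation):

```python
import numpy as np

rng = np.random.default_rng(7)

x = np.linspace(-1.0, 1.0, 500)     # positions across the screen
p = np.cos(10 * x) ** 2             # toy two-slit fringe profile ~ |psi|^2
p /= p.sum()

# Each photon lands at ONE point, drawn from the |psi|^2 distribution...
hits = rng.choice(x, size=50_000, p=p)

# ...but the histogram of many one-at-a-time detections shows the fringes.
counts, _ = np.histogram(hits, bins=50, range=(-1, 1))
```

The "wave" lives in the probability amplitude; the "particle" lives in the individual detection events, which is essentially the point the post is making.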
https://arxiv.org/abs/1811.11060
This article is another take on the idea that you don't really need to add the Born rule or assume it as a postulate, as it is really the only rule that could make sense. In some sense this paper is a bit tighter than Gleason's theorem, but that depends on which assumptions you prefer.
I am just wondering if anyone here has looked at this in detail and have any interesting reactions to it. My reaction is "great, but I don't have any problem with Gleason's theorem! I am already pretty well satisfied that any other probability assignment to a Hilbert space just 'won't work'. " Nevertheless I do still love reading about this kind of thing, and if anyone knows of any recent work that tries to wrap all this up in a nice bow I would appreciate the link!
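I haven't worked through that paper in detail, but the easy half of the Gleason-style claim is simple to check concretely: on a Hilbert space of dimension ≥ 3, the Born assignment Tr(ρP) automatically behaves as a probability measure over every orthonormal basis (the hard half, uniqueness, is the theorem itself). A NumPy spot-check:

```python
import numpy as np

rng = np.random.default_rng(3)

# A random density matrix on C^3 (Gleason's theorem needs dim >= 3).
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho = rho / np.trace(rho).real

# A random orthonormal basis -> rank-1 projectors P_i = |e_i><e_i|.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))

# Born weights p_i = <e_i| rho |e_i> = Tr(rho P_i).
probs = np.array([(Q[:, i].conj() @ rho @ Q[:, i]).real for i in range(3)])

# Non-negative and summing to 1 for any such basis: a probability measure.
assert np.all(probs >= 0) and np.isclose(probs.sum(), 1.0)
```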
Hey all,
I recently covered the concept of measurement in my intro QM class, and it seems absolutely absurd (in the framework we've been formalizing) that you can just willy-nilly collapse a wave function purely by the act of measurement: that certainty can arise from uncertainty so easily.
I remember Sean mentioned in his book that measurement in the many-worlds interpretation is defined by the entanglement of the observer with the system in a larger, more encompassing wave function. In that case, does that mean 'measurement' itself is an operator with its own eigenstates?
I came across this claim in a Japanese piece, but for the sake of translation and clarity I wanted to seek an answer here. I could be wrong in my reading, but from my understanding it nullifies the measurement problem by making it a categorical error. I did not find the argument convincing in the original Japanese piece, but in a few searches around the internet I found an article in support of this claim; the article below discusses the epistemological understanding of the Copenhagen interpretation:
https://www.sjsu.edu/faculty/watkins/copenhageninterp4.htm
In this claim, the wavefunction collapse is explained epistemologically: the wavefunction is read as a time-spent probability density function. I understand that there is no single correct definition of the Copenhagen interpretation and that it is a mixture of hypotheses from the time; however, under this posit the interpretations are historical artifacts that provided accurate mathematical models for predicting the location of particles and serve only the purpose of instrumentalism. It should then follow that Schrödinger's cat was never a paradox to begin with, because it made a categorical error in applying an ontological interpretation (i.e. a hypothesis about what actually happens in reality) to what was an epistemological one (a description of what we know).
So does the measurement problem no longer really exist? I've found conflicting information online on this topic, and not many sources I found directly debate the issue as a categorical one. From what scanty material I found, one school of thought attributes the measurement problem to the limitations of our empirically based science: everything must be measured objectively and therefore requires an observer, which does not preclude the possibility that things happen outside of observation. In particular, I've read through a post on "Classical concepts, properties" on this sub that seems to touch on this matter but is not conclusive to my reading. There is also a discussion in the Wikipedia link in that thread which mentions the following:
In a broad sense, scientific theory can be viewed as offering "scientific realism" - appr