  • This problem presupposes metaphysical realism, so you have to be a metaphysical realist to take it seriously. Metaphysical realism is a particular kind of indirect realism which posits that everything we observe is in some sense not real, sometimes likened to a kind of “illusion” created by the mammalian brain (I’ve also seen people describe it as an “internal simulation”). This illusion is called “consciousness” or sometimes “subjective experience,” with the adjective “subjective” used to make clear that it is being interpreted as something unique to conscious subjects and not ontologically real.

    If everything we observe is in some sense not reality, then “true” reality must by definition be independent of what we observe. If this is the case, it opens up a whole host of confusing philosophical problems, as it would logically mean the entire universe is invisible/unobservable/nonexperiential, except in the precise configuration of matter in the human brain, which somehow “gives rise to” this property of visibility/observability/experience. It seems difficult to explain this without simply presupposing that the property arbitrarily attaches itself to brains in a particular configuration, i.e. treating it as strongly emergent, which is effectively just dualism; indeed, the originator of the “hard problem of consciousness” is a self-described dualist.

    This philosophical problem does not exist in direct realist schools of philosophy, however, such as Jocelyn Benoist’s contextual realism, Carlo Rovelli’s weak realism, or Alexander Bogdanov’s empiriomonism. It is solely a problem for metaphysical realists, because they begin by positing that there exists some fundamental gap between what we observe and “true” reality, and then later have to figure out how to mend that gap. Direct realist philosophies never posit the gap in the first place and treat reality as precisely equivalent to what we observe it to be, so they simply do not posit the existence of “consciousness,” and from a direct realist standpoint it would seem odd to even call experience “subjective.”

    The “hard problem” and the “mind-body problem” are the main reasons I consider myself a direct realist. I find them to be a completely insoluble contradiction at the heart of metaphysical realism; I don’t think they even can be solved, because you cannot posit a fundamental gap and then mend it later without contradicting yourself. There has to be no gap from the get-go. I see these “problems” not as things to be “solved,” but as a proof by contradiction that metaphysical realism is incorrect. The arguments against direct realism, on the other hand, are very weak, and the people who espouse them don’t seem to give them much thought.



  • To put it as simply as possible: in quantum mechanics, the outcome of events is random, but unlike classical probability theory, you can express probabilities as complex numbers. For example, it makes sense in quantum mechanics to say an event has a -70.7i% chance of occurring. This is a bit cumbersome to explain, but the purpose is that there is a relationship between [the relative orientation between the measurement settings and the physical system being measured] and [the probability of measuring a particular outcome]. Using complex numbers gives you the additional degrees of freedom needed to represent both of these things simultaneously and thus relate them together.
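    As a minimal sketch of what a complex “probability” means operationally (in the standard formalism these complex numbers are called probability amplitudes, and the chance you actually observe is the squared magnitude of the amplitude):

```python
# A complex-valued "probability" (formally, a probability amplitude).
amplitude = -0.707j  # the "-70.7i%" chance from the text

# What you actually observe is the squared magnitude (the Born rule),
# so this amplitude corresponds to a ~50% chance of the outcome.
probability = abs(amplitude) ** 2
print(probability)  # ~0.49985
```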

    In classical probability theory, since probabilities are only between 0% and 100%, they can only accumulate, while the fact that probabilities in quantum mechanics can be negative allows them to cancel each other out. The likelihood of one event need not add onto another; if one of them is negative, it can effectively subtract from the other, giving a total chance of 0% that the outcome occurs. This is known as destructive interference and is pretty much the hallmark effect of quantum mechanics. Even entanglement is really just interference between statistically correlated systems.
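    Here is a sketch of that cancellation, contrasted with classical accumulation (the amplitudes below are illustrative values, not from any particular experiment):

```python
from math import sqrt

# Two paths to the same outcome, with equal and opposite amplitudes.
path_a = 1 / sqrt(2)   # "+70.7%"
path_b = -1 / sqrt(2)  # "-70.7%"

# Classical probabilities can only accumulate:
classical = abs(path_a) ** 2 + abs(path_b) ** 2
print(classical)  # 1.0 -> the outcome would be certain

# Quantum amplitudes are summed *before* squaring, so they can cancel:
quantum = abs(path_a + path_b) ** 2
print(quantum)    # 0.0 -> destructive interference: the outcome never occurs
```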

    If you have seen the double-slit experiment, the particle has some probability of going through one slit or the other, and depending on which slit it goes through, it will have some probability of landing somewhere on the screen. You can compute these two possible paths separately and get two separate probability distributions for where it will land on the screen, which would look like two blobs of possible locations. However, since you do not know which slit it will pass through, to compute the final distribution you need to overlap those two probability distributions, effectively adding the two blobs together. What you find is that some parts of the two distributions cancel each other out, leaving a 0% chance that the particle will land there, which is why there are dark bands that show up in the screen, what is referred to as the interference pattern.
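    A toy model of this, assuming an idealized two-slit setup with made-up dimensions (no normalization or single-slit envelope, just the phase difference between the two paths):

```python
import numpy as np

wavelength = 1.0        # arbitrary units
slit_separation = 5.0
screen_distance = 100.0
x = np.linspace(-40, 40, 9)  # sample positions on the screen

# Path length from each slit to each screen position.
r1 = np.hypot(screen_distance, x - slit_separation / 2)
r2 = np.hypot(screen_distance, x + slit_separation / 2)

# Complex amplitude contributed by each slit (phase set by path length).
k = 2 * np.pi / wavelength
amp1 = np.exp(1j * k * r1)
amp2 = np.exp(1j * k * r2)

# Overlapping the two "blobs" as classical distributions: no dark bands.
no_interference = abs(amp1) ** 2 + abs(amp2) ** 2   # 2.0 everywhere

# Adding the amplitudes *first*: some positions cancel to ~0 (dark bands).
interference = abs(amp1 + amp2) ** 2
print(interference.round(3))
```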

    Complex-valued probabilities are so strange that some physicists have speculated that maybe there is an issue with the theory. The physicist David Bohm, for example, had the idea of rewriting the complex wave function as two separate real functions. When he did that, he found he could replace the complex-valued probabilities with real-valued probabilities alongside a propagating “pilot wave,” kinda like a field.
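    A sketch of the decomposition in standard notation (assuming the usual polar form Bohm worked with, where R and S are both real functions):

```latex
% The complex wave function rewritten as two real functions:
% a magnitude R(x,t) and a phase S(x,t).
\psi(x,t) = R(x,t)\, e^{\, i S(x,t)/\hbar}
% Substituted into the Schrodinger equation, this splits into two real
% equations: a continuity equation for the density R^2, and a classical-
% looking equation with an extra "quantum potential" term
%   Q = -\frac{\hbar^2}{2m} \frac{\nabla^2 R}{R}
% which plays the role of the propagating "pilot wave."
```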

    However, the physicist John Bell later showed that if you do this, the only way to reproduce the predictions of quantum mechanics is to violate the speed of light limit. This “pilot wave” field would not be compatible with other known laws of physics, specifically special relativity. Indeed, he would publish a theorem proving that any attempt to get rid of these weird canceling probabilities and replace them with more classical probabilities ends up breaking other known laws of physics.

    That’s precisely where “entanglement” comes into the picture. Entanglement is just a fancy word for a statistically correlated system. But the statistics of correlated systems, when you have complex-valued probabilities, can make different predictions than when you have only real-valued probabilities; they can lead to certain cancellations that you would not expect otherwise. What Bell proved is that these cancellations in an entangled system could only be reproduced by a classical probability theory if it violated the speed of light limit. Despite common misconception, Bell did not prove there is anything superluminal in quantum mechanics, only that you cannot replace quantum mechanics with a classical-esque theory without violating the speed of light limit.

    Despite the fact that there are no speed of light violations in quantum mechanics, these interference effects produce results similar to what you would get if you could violate the speed of light limit. This ultimately allows for more efficient processing of information and information exchange throughout the system.

    A simple example of this is quantum superdense coding. Let’s say I want to send a person a two-qubit message (a qubit is like a bit, either 0 or 1), but I don’t yet know what the message will be, and I send him a single qubit now anyway. Then, a year later, I decide what the message should be, so I send him another qubit. Interestingly enough, it is in principle possible to set up a situation whereby the recipient, who now holds two qubits, can recover the full two-qubit message from them, despite the fact that you transmitted one of those qubits long before you even decided what you wanted the message to be; a sketch of this follows below.
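    Here is a minimal statevector sketch of the idea (illustrative numpy code with hypothetical encode/decode helper names; a real implementation would run on quantum hardware rather than simulating the state directly):

```python
import numpy as np

# Pauli operations the sender can apply to their own single qubit.
I = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

# Shared entangled pair (|00> + |11>)/sqrt(2); one half was sent "a year early."
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

def encode(message, state):
    """Sender encodes a 2-bit message by acting only on their own qubit."""
    b0, b1 = message
    op = (Z if b0 else I) @ (X if b1 else I)
    return np.kron(op, I) @ state

def decode(state):
    """Receiver measures both qubits in the Bell basis to recover the bits."""
    bell_basis = {
        (0, 0): np.array([1, 0, 0, 1]) / np.sqrt(2),
        (0, 1): np.array([0, 1, 1, 0]) / np.sqrt(2),
        (1, 0): np.array([1, 0, 0, -1]) / np.sqrt(2),
        (1, 1): np.array([0, 1, -1, 0]) / np.sqrt(2),
    }
    # The encoded state matches exactly one Bell state (probability 1).
    return max(bell_basis, key=lambda m: abs(bell_basis[m] @ state) ** 2)

for msg in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    assert decode(encode(msg, bell)) == msg
print("all four two-bit messages recovered")
```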

    It’s important to understand that this is not because qubits can actually carry more than one bit of information. No one has ever observed a qubit that was not either a 0 or a 1. It cannot be both simultaneously, nor hold any additional information beyond 0 or 1. It is purely a result of the strange cancellation effects of the probabilities: the likelihoods of different events cancel out in a way that is very different from your everyday intuition, and you can make clever use of this to cause information to be (locally) exchanged throughout a system more efficiently than should be possible in classical probability theory.

    There is another fun example known as the CHSH game. The game is simple: each team is composed of two members, and at the start of the round each is given a card showing, at random, the number 0 or 1. Call the number on the first team member’s card X and the number on the second team member’s card Y. Each team member then turns over their card and writes their own 0 or 1 on the back; call what they write A and B. When the host collects the cards, he checks whether X AND Y = A XOR B, and if the equality holds, the team scores a point.

    The kicker is that the team members are not allowed to talk to one another; they have to come up with their strategy beforehand. I would challenge you to write out a table and try to think of a strategy that will always work. You will find that it is impossible to score a point more than 75% of the time if the team members cannot communicate, though if they could, they could score 100% of the time. But if the team members were given statistically correlated qubits at the beginning of the round and disallowed from communicating, they could make use of interference effects to score a point ~85% of the time: better than should be physically possible in classical probability theory without communication, despite the fact that they are not communicating. The no-communication theorem proves that there is no communication between entangled qubits; these effects come purely from interference, as the sketch below illustrates.
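    A minimal sketch of the quantum strategy, assuming a shared maximally entangled pair and the standard optimal measurement angles (it uses the textbook result that the two answers agree with probability cos²(a − b) when the players measure at angles a and b):

```python
import numpy as np

# Standard optimal measurement angles for the CHSH game.
alice = {0: 0.0, 1: np.pi / 4}       # angle chosen based on card X
bob = {0: np.pi / 8, 1: -np.pi / 8}  # angle chosen based on card Y

def p_win(x, y):
    """Chance of scoring the point given cards x and y.

    For a shared maximally entangled pair, the two answers agree with
    probability cos^2(a - b). The team wins when (A xor B) == (x and y),
    i.e. they want to agree except when both cards show 1.
    """
    p_same = np.cos(alice[x] - bob[y]) ** 2
    return p_same if not (x and y) else 1 - p_same

# Average over the four equally likely card combinations.
avg = np.mean([p_win(x, y) for x in (0, 1) for y in (0, 1)])
print(f"quantum strategy scores {avg:.4f} of the time")  # ~0.8536 vs 0.75 classical
```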

    While you can build a quantum computer using electron spin as you mentioned, it doesn’t have to be built that way. There are many different technologies that operate differently. All you need is something that can exhibit these quantum interference effects, something that can only be accurately predicted using these complex-valued probabilities. Electron spin is what people often think of first because it is simple to comprehend: electrons can only have two spin values, up or down, which you can map to 0 and 1, and you can directly measure spin using a Stern-Gerlach apparatus. This just makes electron spin a simple way to explain how quantum computing works, but quantum computers definitely do not all operate on electron spin. Some operate on photon polarization, for example. Some operate on the motion of ytterbium ions trapped in an electromagnetic field.

    It’s kind of like how you can implement bits using different voltage levels, where 0 V = 0 and 3.3 V = 1, or how you can implement bits using the direction of magnetic polarization on a spinning platter in a hard drive, whereby polarization in one direction = 0 and polarization in the opposite direction = 1. There are many different ways of physically implementing a bit. Similarly, there are many different ways of implementing a qubit. A qubit also needs at minimum two discrete states to assign to 0 and 1, but on top of this it needs to follow the rules of quantum probability theory rather than classical probability theory.