• 0 Posts
  • 27 Comments
Joined 1 year ago
Cake day: July 7th, 2024

  • Interesting that you get downvoted for this, when I mocked someone for saying the opposite, someone who claimed that $0.5m was some enormous amount of money we shouldn’t be wasting. I simply pointed out that we waste literally billions around the world on endless wars killing random people for no reason, so it is silly to come after small-beans quantum computing if budgeting is your actual concern. People seemed to really hate me for saying that, or maybe they just actually like wasting money on bombs to drop on children and so want to cut everything but that.


  • I wouldn’t, I’d just live there. Get to know the people and culture, get married, grow to old age and die. Just like almost everyone there, and like most people in any country. I’d survive just as I’d survive in any other country: go to work every day to earn the income needed to eat, and repeat the process ad infinitum until my body withers away from old age.


  • bunchberry@lemmy.world to Memes@lemmy.ml · “Forgot the disclaimer” · +1/-3 · edited · 7 months ago

    Ah yes, crying about “privilege” while you’re here demanding that people shouldn’t speak out against a literal modern day holocaust at the only time when they have the political power to make some sort of difference. Yeah, it’s totally those people who are “privileged” and not your white pasty ass who doesn’t have to worry about their extended family being slaughtered.


  • bunchberry@lemmy.world to Memes@lemmy.ml · “Forgot the disclaimer” · +13/-1 · edited · 7 months ago

    Good. That’s when Democrats should be criticized the most, because that is the only time you have the power to exercise any leverage over them. Why would you refuse to criticize them when you actually have a tiny bit of leverage and wait until you have no power at all and your criticism is completely irrelevant and will be ignored? That is just someone who wants to complain but doesn’t actually want anything to change.


  • We don’t know what it is. We don’t know how it works. That is why

    If you cannot tell me what you are even talking about then you cannot say “we don’t know how it works,” because you have not defined what “it” even is. It would be like saying we don’t know how florgleblorp works. All humans possess florgleblorp and we won’t be able to create AGI until we figure out florgleblorp, then I ask wtf is florgleblorp and you tell me “I can’t tell you because we’re still trying to figure out what it is.”

    You’re completely correct. But you’ve gone on a very long rant to largely agree with the person you’re arguing against.

    If you agree with me why do you disagree with me?

    Consciousness is poorly defined and a “buzzword” largely because we don’t have a fucking clue where it comes from, how it operates, and how it grows.

    You cannot say we do not know where it comes from if “it” does not refer to anything because you have not defined it! There is no “it” here, “it” is a placeholder for something you have not actually defined and has no meaning. You cannot say we don’t know how “it” operates or how “it” grows when “it” doesn’t refer to anything.

    When or if we ever define that properly

    No, that is your first step, you have to define it properly to make any claims about it, or else all your claims are meaningless. You are arguing about the nature of florgleblorp but then cannot tell me what florgleblorp is, so it is meaningless.

    This is why “consciousness” is interchangeable with vague words like “soul.” They cannot be concretely defined in a way where we can actually look at what they are, so they’re largely irrelevant. When we talk about more concrete things like intelligence, problem-solving capabilities, self-reflection, etc., we can at least come to some loose agreement on what those look like, and we can begin to have a conversation about what tests might look like and how we might quantify them. It is these concrete things that have been the basis of study and research, and we’ve been gradually increasing our understanding of intelligent systems, as shown by the explosion of AI, albeit with miles still to go.

    However, when we talk about “consciousness,” it is just meaningless and plays no role in any of the progress actually being made, because nobody can actually give even the loosest iota of a hint of what it might possibly look like. It’s not defined, so it’s not meaningful. You have to at least specify what you are even talking about for us to even begin to study it. We don’t have to know the entire inner workings of a frog to be able to begin a study on frogs, but we damn well need to be able to identify something as a frog prior to studying it, or else we would have no idea that the thing we are studying is actually a frog.

    You cannot study anything without being able to identify it, which requires defining it at least concretely enough that we can agree whether it is there or not, and that the thing we are studying is actually the thing we aim to study. Why should I believe your florgleblorp, sorry, I mean the “consciousness” you speak of, even exists if you cannot even tell me how to identify it? It would be like if someone insisted there is a florgleblorp hiding in my room. Well, I cannot distinguish between a room with or without a florgleblorp, so by Occam’s razor I opt to disbelieve in its existence. Similarly, if you cannot tell me how to distinguish between something that possesses this “consciousness” and something that does not, how to actually identify it in reality, then by Occam’s razor I opt to disbelieve in its existence.

    It is entirely backwards, spiritualist thinking, popularized by all the mystics, to insist that we need to study something before anyone can even specify what it is, in order to figure out what it is later. That is the complete reversal of how anything works and is routinely used by charlatans to justify pseudoscientific “research.” You have to specify what is being talked about first.


  • we need to figure out what consciousness is

    Nah, “consciousness” is just a buzzword with no concrete meaning. The path to AGI has no relevance to it at all. Even if we develop a machine just as intelligent as human beings, maybe even moreso, that can solve any arbitrary problem just as efficiently, mystics will still be arguing over whether or not it has “consciousness.”

    Edit: You can downvote if you want, but I notice none of you have any actual response to it, because you ultimately know it is correct. Keep downvoting, but not a single one of you will actually reply and tell me how we could concretely distinguish between something that is “conscious” and something that isn’t.

    Even if we construct a robot that can fully replicate all behaviors of a human, you will still be there debating over whether or not it is “conscious,” because you have not actually given the word a concrete meaning that would let us identify whether something has it or not. It’s just a placeholder for vague mysticism, like “spirit” or “soul.”

    I recall a talk from Daniel Dennett where he discussed an old popular movement called the “vitalists.” The vitalists used “life” in a very vague, meaningless way as well; they would insist that even if we understood how living things work mechanically and could reproduce it, the result would still not be considered “alive,” because we don’t understand the “vital spark” that actually makes it “alive.” It would just be an imitation of a living thing without the vital spark.

    The vitalists refused to ever concretely define what the vital spark even was; it was just a placeholder for something vague and mysterious. As we understood more about how life works, vitalists were taken less and less seriously, until eventually becoming largely fringe. People who talk about “consciousness” are also going to become fringe as we continue to understand neuroscience and intelligence, if scientific progress continues, that is. Although this will be a very long-term process, maybe taking centuries.



  • The space mechanics were definitely one of the great things about that game, in my opinion. In most space games, when you land, you just press a button and it plays an animation. Having to land manually with a landing camera is very satisfying. When you crash and parts of your ship break and you have to float outside to fix them, that was also very fun. I feel like a lot of space games are a bit lazy about the actual space mechanics; this game did it very well.


  • Complex numbers are just a way of representing an additional degree of freedom in an equation. You have to represent complex numbers not on a number line but on the complex plane, so each complex number is associated with two numbers. That means if you create a function that requires two inputs and two outputs, you could “compress” that function into a single input and output by using complex numbers.

    Complex numbers are used all throughout classical mechanics. Waves are two-dimensional objects because they have both an amplitude and a phase. Classical wave dynamics thus very often uses complex numbers, because you can capture the properties of waves more concisely. An example of this is the Fourier transform. If you look up the function, it looks very scary: it has an integral and Euler’s number raised to a negative imaginary power. However, if you’ve worked with complex numbers a lot, you’d immediately recognize that raising Euler’s number to an imaginary number is just how you represent rotations on the complex plane.

    Despite how scary the Fourier transform looks, literally all it is actually doing is wrapping a wave around a circle. 3Blue1Brown has a good video on his channel on how to visualize the Fourier transform. The Fourier transform, again, isn’t anything inherently quantum mechanical; we use it all the time in classical mechanics. For example, if you ever used an old dial-up modem and wondered why it made those weird noises: it was encoding data as sound by representing the data as different harmonic waves that it would then add together, producing that sound. The Fourier transform could then be used by the modem at the other end to break the sound back apart into those harmonic waves and decode it back into data.
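    To make this concrete, here is a minimal numpy sketch of the modem idea (the tone frequencies are made up purely for illustration): mix two harmonic waves together, then use the FFT to recover which tones are present.

    ```python
    import numpy as np

    # Build a "signal" out of two harmonic waves, like a modem encoding
    # data as a mix of tones (50 Hz and 120 Hz are arbitrary choices).
    sample_rate = 1000                     # samples per second
    t = np.arange(0, 1, 1 / sample_rate)   # one second of samples
    signal = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 120 * t)

    # The FFT "wraps the wave around a circle" at each test frequency
    # (multiplying by e^(-2*pi*i*f*t)) and sums; peaks mark the tones present.
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

    # The dominant frequencies recovered: the two tones we mixed in.
    print(freqs[np.abs(spectrum) > 100])  # -> [ 50. 120.]
    ```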

    In quantum mechanics, properties of systems always have an additional kind of “orientation” to them. When particles interact, if their orientations are aligned, the outcome of the interaction is deterministic. If they are misaligned, then it introduces randomness. For example, an electron’s spin state can either be up or down. However, its spin state also has a particular orientation to it, so you can only measure it “correctly” by having the orientation of the measuring device aligned with the electron. If they are misaligned, you introduce randomness. These orientations are often associated with physical rotations; for example, with the electron’s spin state, you measure it with something known as a Stern-Gerlach apparatus, and to measure the electron along a different orientation you have to physically rotate the whole apparatus.

    Because the probability of measuring certain things directly relates to the relative orientation between your measuring device and the particle, it would be nice if we had a way to represent both the relative orientation and the probability at the same time. And, of course, you guessed it, we do. It turns out you can achieve this simply by representing your probability amplitudes (the % chance of something occurring) as complex numbers. This means in quantum mechanics, for example, an event can have a -70.7i% chance of occurring.

    While that sounds weird at first, you quickly realize the only reason we represent it this way is that it directly connects the relative orientation between the interacting systems and the probabilities of certain outcomes. You see, you can convert quantum probabilities to classical ones just by computing the distance from 0% on the complex plane and squaring it, which in the case of -70.7i% gives you 50%, telling you it is basically a fair coin flip. However, you can also compute from the same number the relative orientation of the two measuring devices, which in this case turns out to be a 90-degree rotation. Hence, because both values are computed from the same number, if you rotate the measuring device it must necessarily alter the probabilities of different outcomes.
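    You can check those numbers yourself with Python’s built-in complex numbers; this is just the -70.7i% example from above.

    ```python
    import cmath

    amplitude = -0.707j                    # the "-70.7i%" quantum probability
    classical_p = abs(amplitude) ** 2      # distance from 0, squared (Born rule)
    phase = cmath.phase(amplitude) * 180 / cmath.pi  # relative orientation

    print(classical_p)  # ~0.5, i.e. a fair coin flip
    print(phase)        # -90.0, i.e. a 90-degree relative rotation
    ```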

    You technically don’t need to ever use complex numbers. You could, for example, take the Schrodinger equation and just break it up into two separate equations for the real and imaginary parts, with both acting on real numbers. Indeed, if you build a quantum computer simulator on a classical computer in a programming language without native complex numbers, all your algorithms have to break each complex number into two real numbers. It’s just that when you are writing these equations down, they can get very messy this way. Complex numbers are far more concise for representing additional degrees of freedom without needing additional equations/functions.
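    As a rough sketch of what that splitting looks like (the helper here is hypothetical, just to show the idea): a complex amplitude becomes a pair of reals, and complex multiplication, which is everywhere in quantum gate application, becomes two coupled real-valued formulas.

    ```python
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i, carried as pairs of reals,
    # the way a simulator in a language without complex numbers would do it.
    def cmul(x, y):
        a, b = x
        c, d = y
        return (a * c - b * d, a * d + b * c)

    # Rotating the amplitude 0.707 by 90 degrees, i.e. multiplying by i:
    print(cmul((0.707, 0.0), (0.0, 1.0)))  # -> (0.0, 0.707)
    ```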


  • bunchberry@lemmy.world to Comic Strips@lemmy.world · “Stereotyping” · +3/-1 · edited · 8 months ago

    ngl I blame physicists who communicate to the public for this

    Notice how you always see a lot of nonsense mysticism around quantum mechanics like “quantum healing” but you never see anything along the lines of like “general relativity healing” or “inflation theory healing.”

    The difference is that often it is the physicists themselves who choose to communicate to the public who paint quantum mechanics in a mystical light. Indeed, this is not even unique to the physicists who communicate to the public; you can run into it even in peer-reviewed publications, painting QM as a theory that somehow puts conscious observers front and center and questions the existence of objective reality, or whatever rubbish philosophy people try to imbue onto some linear algebra.

    The ones who communicate to the public are often just worse because they don’t tell you QM as it really is; they usually tell you some personal theory they have. For example, rather than just describing how QM works, one of these science communicators might tell you their personal theory about how there’s a grand multiverse, or that “consciousness” plays some sort of role, and that this explains why QM works. They do not just present the theory, but their own personal speculation as an underlying explanation for it.

    Because physicists themselves promote all this mysticism around a bunch of linear algebra, you end up with mystics and charlatans who realize that they can take advantage of this by talking about mystical nonsense like “quantum healing.” Sure, it might be nonsensical rubbish, but the person who hears about “quantum healing” also heard a real PhD physicist tell them about multiverses and “consciousness,” so they think there must be something to it as well. It gives the mysticism an air of legitimacy.

    We like to kid ourselves that the mysticism is just promoted by your Deepak Chopra types or laymen who have no idea what they’re talking about. But if you actually look at what a real academic philosophy department publishes, there is mysticism all throughout academic philosophy. These philosophers have also had a big impact on physicists, who often adopt these mystical attitudes they learn from the philosophy department into their own discussion, and sometimes even into their own publications.

    If you actually talk to the laymen who are deeply enthralled by those quantum mystic pseudoscience charlatans, they usually can point you to multiple real academics who back their beliefs, people with legitimate credentials. This is a problem nobody seems to address and it annoys the hell out of me. Everyone paints either the charlatans or the laymen as the bad guy here, but nobody wants to talk about the elephant in the room which is the rampant mysticism in academia.

    I literally argued with a PhD physicist the other day who was going around preaching to people that quantum mechanics proves that there is no physical reality and we all live inside of a “cosmic consciousness.” I did not get very far with him because he just insulted me and pointed to academic philosophers who agreed with him and said I’m stupid for even questioning his claims, and then wouldn’t address my criticisms.



  • Honestly, the random number generation on quantum computers is practically useless. Its speed will not get anywhere near that of a pseudorandom number generator, and there are very simple PRNGs you can implement that are blazing fast, far faster than any quantum computer will spit numbers out, and that produce output widely considered in the industry to be cryptographically secure. You can use AES, for example, as a PRNG, and most modern CPUs, such as x86 processors, have hardware-level AES implementations. This is why modern computers let you encrypt your drive: you can have a file a terabyte big that is encrypted, yet your CPU can decrypt it about as fast as it takes for the window to pop up after you double-click it.
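    As a rough sketch of the AES-as-PRNG idea (assuming the third-party Python cryptography package): a tiny 32-byte seed of entropy gets stretched into as many cryptographically secure pseudorandom bytes as you want by encrypting zeros with AES in CTR mode.

    ```python
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    seed = os.urandom(32)   # small entropy seed (the OS gathers this from hardware)
    nonce = os.urandom(16)  # initial counter block

    # AES-CTR encrypts a running counter; encrypting zero bytes just exposes
    # the keystream, which serves as the pseudorandom output. Hardware AES
    # instructions make this run at gigabytes per second.
    prng = Cipher(algorithms.AES(seed), modes.CTR(nonce)).encryptor()
    random_bytes = prng.update(b"\x00" * 1024)  # 1 KiB of CSPRNG output

    print(random_bytes[:16].hex())
    ```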

    While a PRNG does require an entropy pool, the pool does not need to be large: you can spit out terabytes of cryptographically secure pseudorandom numbers from a fraction of a kilobyte of entropy data. And again, most modern CPUs include instructions to gather this entropy, such as Intel’s RDSEED instruction, which lets you grab thermal noise from the CPU. To avoid someone discovering a potential exploit, most modern OSes will mix other sources into this pool as well, like fluctuations in fan voltage.

    Indeed, Linux used to have one interface for reading random numbers directly from the entropy pool and another for reading pseudorandom numbers: /dev/random and /dev/urandom. If you read from the entropy pool and it ran out, the read would block until more entropy could be collected, which is why some old Linux programs would freeze until you did things like move your mouse around.

    But you don’t see this anymore, because generating enormous amounts of cryptographically secure random numbers is so easy with modern algorithms that modern Linux just collects a little entropy at boot and uses that to seed all pseudorandom numbers afterwards. It got rid of reading the pool directly; both /dev/random and /dev/urandom now have the same behavior internally. Any time your PC needs a random number, it pulls from the pseudorandom number generator that was seeded at boot, and from that short window of collecting entropy at boot you can generate sufficient pseudorandom numbers basically forever. These are the numbers used for any cryptographic application you may choose to run.

    The point of all this is just to say that random number generation is genuinely a solved problem; people don’t get just how easy it is to produce practically infinite cryptographically secure pseudorandom numbers. While on paper quantum computers are “more secure” because their random numbers would be truly random, in practice you would literally never notice a difference. If you gave two PhD mathematicians or statisticians the same message, one encrypted using a quantum random number generator and one encrypted with a PRNG built from AES or ChaCha20, and asked them to decipher them, they would not be able to decipher either. In fact, I doubt they would even be able to identify which one was encrypted using the quantum random number generator. A string of random numbers looks just as “random” to any randomness test suite whether it came from a QRNG or a high-quality PRNG (usually called a CSPRNG).

    I do think that, at least on paper, quantum computers could be a big deal if the engineering challenges can ever be overcome, but quantum cryptography, such as “the quantum internet,” is largely a scam. All the cryptographic aspects of quantum computers are practically the same as, if not worse than, traditional cryptography, with only theoretical benefits that are technically there on paper but that nobody would ever notice in practice.


  • the study that found the universe is not locally real. Things only happen once they are observed

    This is only true if you operate under a very specific and strict criterion of “realism” known as metaphysical realism. Einstein put forward a criterion of what he thought this philosophy implied for a physical theory, and his criterion is sometimes called scientific realism.

    Metaphysical realism is a very complex philosophy. One of its premises is that there exists an “absolute” reality where all objects are made up of properties that are independent of perspective. Everything we perceive is wholly dependent upon perspective, so metaphysical realism claims that what we perceive is not “true” reality but sort of an illusion created by the brain. “True” reality is then treated as the absolute spacetime filled with particles captured in the mathematics of Newton’s theory.

    The reason it relies on this premise is that, by assigning objects perspective-invariant properties, they can continue to exist even if no other object is interacting with them, or, more specifically, they continue to exist even if “no one is looking at them.” For example, if you fire a cannonball from point A to point B, and you only observe it leaving point A and arriving at point B, Newtonian mechanics allows you to “track” its path between those two points even though you did not observe it.

    The problem is that you cannot do this in quantum mechanics. If you fire a photon from point A to point B, the theory simply disallows you from unambiguously filling in the “gaps” between the two points. People then declare that “realism is dead,” but this is a bit misleading because this is really only a problem for metaphysical/scientific realism. There are many other kinds of realism in literature.

    For example, the philosopher Jocelyn Benoist’s contextual realism argues the exact opposite. The mathematical theory is not “true reality” but is instead a description of reality, and a description of reality is not the same as reality. Would a description of the Eiffel Tower substitute for actually seeing it in reality? Of course not; they’re not the same. Contextual realism instead argues that what is real is not the mathematical description but precisely what we perceive. The reason we perceive reality in a way that depends upon perspective is that reality just is relative (or “contextual”). There is no “absolute” reality, only a contextual reality, and that contextual reality we perceive directly as it really is.

    Thus for contextual realism, there is no issue with the fact that we cannot “track” things unambiguously, because it has no attachment to treating particles as if they persist as autonomous entities. It is perfectly fine with just treating it as if the particle hops from point A to point B according to some predictable laws, relative to the context the observer occupies. That is just how objective reality works. Observation isn’t important, and indeed, not even measurement, because whatever you observe in the experimental setting is just what reality is like in that context. The only thing that “arises” is your identification.


  • Why did physicists start using the words “real” and “realism”? They are philosophical terms, not physical ones, and they lead to a lot of confusion. “Local” has a clear physical meaning; “realism” gets confusing. I have seen some papers that use “realism” in a way that has a clear physical definition, such as one I came across that defined it in terms of a hidden variable theory. Yet I have also seen a paper coauthored by the great Anton Zeilinger that speaks of “local realism” but very explicitly uses “realism” with its philosophical meaning, that there is an objective reality independent of the observer, and to me it is absurd to pretend that physics in any way calls this into question.

    If you read John Bell’s original paper “On the Einstein Podolsky Rosen Paradox,” he never once uses the term “realism.” The only time I have seen “real” used at all in this early discourse is in the original EPR paper, but this was merely a “criterion” (a necessary but not sufficient condition) for what would constitute a theory that is a complete description of reality. Einstein, Podolsky, and Rosen in no way presented this as a definition of “reality” or a kind of “realism.”

    Indeed, even using the term “realism” on its own is ambiguous, as there are many kinds of “realisms” in the literature. The phrase “local realism” on its own is bound to lead to confusion, and it does, because, as I pointed out, even in the published literature physicists do not always use “realism” consistently. If you are going to talk about “realism,” you need to preface it to make clear what kind of realism you are specifically talking about.

    If the reason physicists started to talk about “realism” is that they are specifically referring to something that includes the EPR criterion, then they should call it “EPR realism” or something like that. Just saying “realism” is so ambiguous it is almost as if they are intentionally trying to cause confusion. I don’t really blame anyone who gets confused about this because, like I said, there is not even consistent usage in the peer-reviewed papers.

    The phrase “observer-dependence” is also very popular in the published literature. So, while I am not disagreeing with you that “observation” is just an interaction, this is actually a rather uncommon position known as relational quantum mechanics.


  • A lot of people who present quantum mechanics to a lay audience seem to intentionally present it as confusingly as possible because they like the “mystery” behind it. Yet it is also easy to present it in a trivially simple, even boring, way that is easy to understand.

    Here, I will give you a simple framework of just 3 rules; if you keep them in mind, then literally everything in quantum mechanics makes sense and follows quite simply.

    1. Quantum mechanics is a probabilistic theory where, unlike classical probability theory, the probabilities of events can be complex-valued. For example, it is meaningful in quantum mechanics for an event to have something like a -70.7i% chance of occurring.
    2. The physical interpretation of complex-valued probabilities is that the further the probability is from zero, the more likely it is. For example, an event with a -70.7i% probability of occurring is more likely than one with a 50% probability of occurring because it is further from zero. (You can convert quantum probabilities to classical just by computing their square magnitudes, which is known as the Born rule.)
    3. If two or more events become statistically correlated with one another (this is known as “entanglement”), the rules of quantum mechanics disallow you from assigning quantum probabilities to the individual systems taken separately. You can only assign the quantum probabilities to the two or more events taken together. (The only way to recover the individual probabilities is to do something called a partial trace to compute the reduced density matrix.)

    If you keep those three principles in mind, then everything in quantum mechanics follows directly, every “paradox” is resolved, there is no confusion about anything.

    For example, why is it that people say quantum mechanics is fundamentally random? Well, because if the universe is deterministic, then all outcomes have either a 0% or 100% probability, and all other probabilities are simply due to ignorance (what is called “epistemic”). Notice how 0% and 100% have no negative or imaginary terms. They thus could not give rise to quantum effects.

    These quantum effects are interference effects. You see, if probabilities are only between 0% and 100% then they can only be cumulative. However, if they can be negative, then the probabilities of events can cancel each other out and you get no outcome at all. This is called destructive interference and is unique to quantum mechanics. Interference effects like this could not be observed in a deterministic universe because, in reality, no event could have a negative chance of occurring (because, again, in a deterministic universe, the only possible probabilities are 0% or 100%).
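    Here is a quick numpy sketch of that cancellation using the standard Hadamard gate: one application gives a fair coin flip, and a second application makes the negative amplitude cancel the positive one, so the outcome is deterministic again.

    ```python
    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)  # Hadamard gate
    one = np.array([0.0, 1.0])            # definite state: 100% chance of "1"

    after_one = H @ one                   # amplitudes [0.707, -0.707]
    print(np.abs(after_one) ** 2)         # -> [0.5 0.5], a fair coin flip

    after_two = H @ after_one             # negative amplitude cancels positive
    print(np.abs(after_two) ** 2)         # -> [0. 1.], deterministic again
    ```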

    If we look at the double-slit experiment, people then ask why the interference pattern seems to go away when you measure which path the photon took. Well, if you keep these rules in mind, it’s simple. There are actually two reasons, and it depends upon perspective.

    If you are the person conducting the experiment, when you measure the photon, it’s impossible to measure half a photon. It’s either there or it’s not, so 0% or 100%. You thus force it into a definite state, which again, these are deterministic probabilities (no negative or imaginary terms), and thus it loses its ability to interfere with itself.

    Now, let’s say you have an outside observer who doesn’t see your measurement results. For him, it’s still probabilistic since he has no idea which path it took. Yet, the whole point of a measuring device is to become statistically correlated with what you are measuring. So if we go to rule #3, the measuring device should be entangled with the particle, and so we cannot apply the quantum probabilities to the particle itself, but only to both the particle and measuring device taken together.

    Hence, from the outside observer’s perspective, only the particle and measuring device collectively could exhibit quantum interference. Yet only the particle passes through the two slits on its own, without the measuring device. Thus they too would predict that it would not interfere with itself.

    Just keep these three rules in mind and you basically “get” quantum mechanics. All the other fluff you hear is people attempting to make it sound more mystical than it actually is, such as by interpreting the probability distribution as a literal physical entity, or even going more bonkers and calling it a grand multiverse, and then debating over the nature of this entity they entirely made up.

    It’s literally just statistics with some slightly different rules.


  • I am factually correct, I am not here to “debate,” I am telling you how the theory works. When two systems interact such that they become statistically correlated with one another and knowing the state of one tells you the state of the other, it is no longer valid to assign a state vector to the individual subsystems that took part in the interaction; you have to assign it to the system as a whole. When you do a partial trace on each subsystem individually to get its reduced density matrix, if the systems are perfectly entangled, you end up with a density matrix without coherence terms and thus without interference effects.

    This is absolutely entanglement, this is what entanglement is. I am not misunderstanding what entanglement is, if you think what I have described here is not entanglement but a superposition of states then you don’t know what a superposition of states is. Yes, an entangled state would be in a superposition of states, but it would be a superposition of states which can only be applied to both correlated systems together and not to the individual subsystems.

    Let’s say R = 1/sqrt(2) and Alice sends Bob a qubit. If the qubit has a probability of 1 of being the value 1 and Alice applies the Hadamard gate, it changes to R probability of being 0 and -R probability of being 1. In this state, if Bob were to apply a second Hadamard gate, then it undoes the first Hadamard gate and so it would have a probability of 1 of being a value of 1 due to interference effects.

    However, if an eavesdropper, let’s call them Eve, measures the qubit in transit, because R and -R are equal distances from the origin, it would have an equal chance of being 0 or 1. Let’s say it’s 1. From their point of view, they would then update their probability distribution to be a probability of 1 of being the value 1 and send it off to Bob. When Bob applies the second Hadamard gate, it would then have a probability of R for being 0 and a probability of -R for being 1, and thus what should’ve been deterministic is now random noise for Bob.

    Yet this description only works from Eve’s point of view. From Alice and Bob’s point of view, neither of them measured the particle in transit, so when Bob receives it, it is still probabilistic, with an equal chance of being 0 and 1. So why does Bob still predict that interference effects will be lost if it is still probabilistic for him?

    Because when Eve interacts with the qubit, from Alice and Bob’s perspective, it is no longer valid to assign a state vector to the qubit on its own. Eve and the qubit become correlated with one another. For Eve to know the particle’s state, there has to be some correlation between something in Eve’s brain (or, more directly, her measuring device) and the state of the particle. They are thus entangled with one another and Alice and Bob would have to assign the state vector to Eve and the qubit taken together and not to the individual parts.

    Eve and the qubit taken together would have a probability distribution of R for the qubit being 0 and Eve knowing the qubit is 0, and a probability of -R for the qubit being 1 and Eve knowing the qubit is 1. There are still interference effects, but only for the whole system taken together. Yet Bob does not receive Eve and the qubit taken together. He receives only the qubit, so this probability distribution is no longer applicable to the qubit alone.

    He instead has to do a partial trace to trace out (ignore) Eve from the equation to know how his qubit alone would behave. When he does this, he finds that the probability distribution has changed to 0.5 for 0 and 0.5 for 1. In the density matrix representation, you will see that the density matrix has all zeroes for the coherences. This is a classical probability distribution, something that cannot exhibit interference effects.
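    If you want to verify that computation, here is a small numpy sketch of the partial trace in the usual density-matrix formalism: the entangled Eve-plus-qubit state still has interference terms as a whole, but tracing out Eve leaves a qubit density matrix with zeroed coherences.

    ```python
    import numpy as np

    R = 1 / np.sqrt(2)

    # Qubit alone after one Hadamard: amplitudes (R, -R). Its density
    # matrix has nonzero off-diagonal coherences, so it can interfere.
    psi = np.array([R, -R])
    print(np.outer(psi, psi))  # off-diagonals are -0.5: coherent

    # After Eve measures, the state is assigned to Eve and the qubit jointly:
    # amplitude R for (qubit=0, Eve saw 0), -R for (qubit=1, Eve saw 1).
    joint = np.array([R, 0, 0, -R])       # basis: |00>, |01>, |10>, |11>
    rho = np.outer(joint, joint)          # 4x4 joint density matrix

    # Partial trace over Eve leaves the qubit's reduced density matrix.
    rho_qubit = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    print(rho_qubit)  # -> [[0.5 0.], [0. 0.5]]: coherences gone
    ```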

    Bob simply cannot explain why his qubit loses its interference effects by Eve measuring it without Bob taking into account entanglement, at least within the framework of quantum theory. That is just how the theory works. The explanation from Eve’s perspective simply does not work for Bob in quantum mechanics. Reducing the state vector simultaneously between two different perspectives is known as an objective collapse model and makes different statistical predictions than quantum mechanics. It would not merely be an alternative interpretation but an alternative theory.

    Eve explains the loss of coherence by her reducing the state vector upon seeing a definite outcome for the qubit; Bob explains the loss of coherence by Eve becoming entangled with the qubit, which leads to decoherence, as doing a partial trace to trace out (ignore) Eve gives a reduced density matrix for the qubit whose coherence terms are zero.



  • Personally, I think there is a much bigger issue with the quantum internet that is often not discussed and it’s not just noise.

    Imagine, for example, I were to offer you two algorithms. One can encrypt things so well that it would take a hundred trillion years for even a superadvanced quantum computer to break the encryption, and it almost has no overhead. The other is truly unbreakable even in an infinite amount of time, but it has a huge amount of overhead to the point that it will cut your bandwidth in half.

    Which would you pick?

    In practice, there is no difference between an algorithm that cannot be broken for trillions of years, and an algorithm that cannot be broken at all. But, in practice, cutting your internet bandwidth in half is a massive downside. The tradeoff just isn’t worth it.

    All quantum “internet” algorithms suffer from this problem. There is always some massive practical tradeoff for a purely theoretical benefit. Even if we make it perfectly noise-free and entirely solve the noise problem, there would still be no practical reason at all to adopt the quantum internet.


  • The problem with one-time pads is that they’re also the most inefficient cipher. If we switched to them for internet communication (ceteris paribus), it would basically cut internet bandwidth in half overnight. What’s more, the one-time pad is a symmetric cipher, and symmetric ciphers cannot be broken by quantum computers; ciphers like AES256 are still considered quantum-computer-proof. This means you would be cutting internet bandwidth in half for purely theoretical benefits that people wouldn’t notice in practice. The only people I could imagine finding this interesting are overly paranoid governments, as there are no practical benefits.
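    To make the overhead concrete, a minimal Python sketch of a one-time pad: the key must be exactly as long as the message and can never be reused, so every byte of data requires delivering a byte of key alongside it.

    ```python
    import os

    message = b"attack at dawn"
    pad = os.urandom(len(message))  # key as long as the message, used once

    ciphertext = bytes(m ^ k for m, k in zip(message, pad))
    decrypted = bytes(c ^ k for c, k in zip(ciphertext, pad))
    assert decrypted == message

    # Total material that had to be moved: len(message) bytes of ciphertext
    # plus len(pad) bytes of key -- double the size of the data itself.
    ```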

    It also really isn’t a selling point for quantum key distribution that it can reliably detect an eavesdropper. Modern cryptography does not care about detecting eavesdroppers. When two people are exchanging keys with a Diffie-Hellman key exchange, eavesdroppers are allowed to eavesdrop all they wish, but they cannot make sense of the data in transit. The problem with quantum key distribution is that it is worse than this: it cannot prevent an eavesdropper from seeing the transmitted key; it just discards the key if they do. This seems to me like it would make it a bit harder to scale, although not impossible, because anyone can deny service just by observing the packets of data in transit.

    Although, the bigger issue that nobody seems to talk about is that quantum key distribution, just like the Diffie-Hellman algorithm, is susceptible to a man-in-the-middle attack. Yes, it prevents an eavesdropper between two nodes, but if the eavesdropper sets themselves up as a third node pretending to be different nodes when queried from either end, they could trivially defeat quantum key distribution. Although, Diffie-Hellman is also susceptible to this, so that is not surprising.

    What is surprising is that with Diffie-Hellman (or more commonly its elliptic curve brethren), we solve this using digital signatures, which are part of public key infrastructure. With quantum mechanics, however, the only equivalent to digital signatures relies on the No-cloning Theorem. The No-cloning Theorem says that if I gave you a qubit and you don’t know how it was prepared, nothing you can do to it can tell you its quantum state, since that would require knowledge of how it was prepared. You can use the fact that only a single person can be aware of its quantum state as a form of digital signature.

    The thing is, however, the No-cloning Theorem only holds for a single copy. If I prepared a million qubits all the same way and handed them to you, you could derive their quantum state by doing different measurements on each qubit. Even though you could use this for digital signatures, those digital signatures would have to be disposable: if you made too many copies of them, they could be reverse-engineered. This presents a problem for using them as part of public key infrastructure, as public key infrastructure requires those keys to be, well, public, meaning anyone can take a copy, and so infinite copyability is a requirement.
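    As a rough illustration of that reverse-engineering, a little Monte Carlo sketch (the amplitudes are made up so we can check the answer; pinning down the phase as well would just mean also measuring some copies in a rotated basis):

    ```python
    import numpy as np

    rng = np.random.default_rng()

    # A million qubits all secretly prepared the same way:
    # amplitude 0.6 for |0> and 0.8 for |1>.
    p0_true = 0.6 ** 2

    # Each measurement yields a single 0 or 1 -- one copy alone tells you
    # almost nothing, but the statistics over many copies expose the state.
    outcomes = rng.random(1_000_000) < p0_true
    p0_estimate = outcomes.mean()

    print(p0_estimate)           # ~0.36
    print(np.sqrt(p0_estimate))  # ~0.6: the |0> amplitude recovered
    ```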

    This makes quantum key distribution only reliable if you combine it with quantum digital signatures, but when you do that, it becomes impossible to scale it to some sort of “quantum internet.” It might, again, be something an overly paranoid government could find useful internally as part of its own small-scale intranet, but it would be too impractical, without any noticeable benefits, for anyone outside of that. As, again, all of this is for purely theoretical benefits, not anything you’d notice in the real world, as things like AES256 are already considered uncrackable in practice.


  • Entanglement plays a key role.

    Any time you talk about “measurement,” this is just observation, and the result of an observation is to reduce the state vector, which is just a list of complex-valued probability amplitudes. The fact that they are complex numbers gives rise to interference effects. When the eavesdropper observes a definite outcome, you no longer need to treat the qubit as probabilistic; you can therefore reduce the state vector by updating your probabilities to simply 100% for the outcome you saw. The number 100% has no negative or imaginary components, and so it cannot exhibit interference effects.

    It is this loss of interference which is ultimately detectable on the other end. If you apply a Hadamard gate to a qubit, you get a state vector that represents equal probabilities for 0 or 1, but in a way that can still exhibit interference in later interactions. For example, if you applied a second Hadamard gate, it would return to its original state due to interference. If instead you had a qubit prepared with a 50% probability of being 0 or 1 but without interference terms (coherences), then applying a second Hadamard gate would not return it to its original state but would just give you a random output.

    Hence, if qubits have undergone decoherence, i.e., if they have lost their ability to interfere with themselves, this is detectable. An obvious example is the double-slit experiment: the pattern on the screen is visibly different depending on whether or not the photons can interfere with themselves. Quantum key distribution detects whether an observer made a measurement in transit by relying on decoherence. A Hadamard gate is randomly applied to half the qubits and not to the other half, and which qubits it was applied to is not revealed until after the transmission is complete. If the recipient receives a qubit that had a Hadamard gate applied to it, they have to apply it again themselves to cancel it out, but they don’t know which ones to apply it to until all the qubits are transmitted and this is revealed.

    That means, at random, half the qubits they receive they just read as-is, and for the other half they rely on interference effects to move them back into their original state. Anyone who intercepts this by measuring it in transit causes it to decohere, and thus when the recipient applies the Hadamard gate a second time to cancel out the first, they get random noise rather than the gates actually cancelling out. The recipient receiving random noise when they should be getting definite values is how you detect that there is an eavesdropper.
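    Here is a simplified numpy simulation of that scheme (BB84-style; a toy model, so the numbers illustrate the principle rather than any real implementation): the sender randomly Hadamards half the qubits, an eavesdropper who measures in transit collapses them, and the error rate on the receiving end reveals her presence.

    ```python
    import numpy as np

    rng = np.random.default_rng()
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

    def measure(state):
        """Collapse a 2-amplitude state vector to a definite 0 or 1."""
        bit = int(rng.random() < abs(state[1]) ** 2)
        collapsed = np.zeros(2)
        collapsed[bit] = 1.0
        return bit, collapsed

    n = 10_000
    sent = rng.integers(0, 2, n)
    hadamarded = rng.random(n) < 0.5  # which qubits the sender applies H to
    eve_present = True

    errors = 0
    for bit, had in zip(sent, hadamarded):
        state = np.zeros(2)
        state[bit] = 1.0
        if had:
            state = H @ state          # encode in the rotated basis
        if eve_present:
            _, state = measure(state)  # Eve's measurement destroys coherence
        if had:
            state = H @ state          # receiver undoes the Hadamard...
        received, _ = measure(state)   # ...then reads the bit
        errors += received != bit

    print(errors / n)  # ~0.25 with Eve present, 0.0 without
    ```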

    What does this have to do with entanglement? If we just talk about “measuring a state” then quantum mechanics would be a rather paradoxical and inconsistent theory. If the eavesdropper measured the state and updated the probability distribution to 100% and thus destroyed its interference effects, the non-eavesdroppers did not measure the state, so it should still be probabilistic, and at face value, this seems to imply it should still exhibit interference effects from the non-eavesdroppers’ perspective.

    A popular way to get around this is to claim that the act of measurement is something “special” which always destroys the quantum probabilities and forces it into a definite state. That means the moment the eavesdropper makes the measurement, it takes on a definite value for all observers, and from the non-eavesdroppers’ perspective, they only describe it still as probabilistic due to their ignorance of the outcome. At that point, it would have a definite value, but they just don’t know what it is.

    However, if you believe that, then that is not quantum mechanics and in fact makes entirely different statistical predictions to quantum mechanics. In quantum mechanics, if two systems interact, they become entangled with one another. They still exhibit interference effects as a whole as an entangled system. There is no “special” interaction, such as a measurement, which forces a definite outcome. Indeed, if you try to introduce a “special” interaction, you get different statistical predictions than quantum mechanics actually makes.

    This is because in quantum mechanics, every interaction grows the scale of entanglement, and so the interference effects never go away, they just spread out. If you introduce a “special” interaction such as a measurement whereby things are forced into a definite value for all observers, then you are inherently suggesting there is a limit to this scale of entanglement: some cut-off point past which interference effects can no longer be scaled. And because we can detect whether a system exhibits interference effects (that’s what quantum key distribution is based on), such an alternative theory (called an objective collapse model) would necessarily differ from quantum mechanics in its numerical predictions.

    The actual answer to this seeming paradox is provided by quantum mechanics itself: entanglement. When the eavesdropper observes the qubit in transit, then from the perspective of the non-eavesdroppers, the eavesdropper becomes entangled with the qubit. It is then no longer valid in quantum mechanics to assign the state vector to the eavesdropper and the qubit separately, but only to them together as an entangled system. However, the recipient does not receive both the qubit and the eavesdropper; they only receive the qubit. If they want to know how the qubit behaves, they have to do a partial trace to trace out (ignore) the eavesdropper, and when they do this, they find that the qubit’s state is still probabilistic, but it is a probability distribution with only terms between 0% and 100%, that is to say, no negatives or imaginary components, and thus it cannot exhibit interference effects.

    Quantum key distribution does indeed rely on entanglement as you cannot describe the algorithm consistently from all reference frames (within the framework of quantum mechanics and not implicitly abandoning quantum mechanics for an objective collapse theory) without taking into account entanglement. As I started with, the reduction of the wave function, which is a first-person description of an interaction (when there are 2 systems interacting and one is an observer describing the second), leads to decoherence. The third-person description of an interaction (when there are 3 systems and one is on the “outside” describing the other two systems interacting) is entanglement, and this also leads to decoherence.

    You even say that “measurement changes the state”, but how do you derive that without entanglement? It is entanglement between the eavesdropper and the qubit that leads to a change in the reduced density matrix of the qubit on its own.