  • bunchberry@lemmy.world to Comic Strips@lemmy.world · Time · edited 1 day ago

    Many Worlds is a rather bizarre interpretation.

    1) Even the creator of MWI, Hugh Everett, agreed that the wavefunction is relative and wrote a paper on that, yet he also claimed there is a “universal” wavefunction. That makes about as much sense as saying there is a “universal velocity” in Galilean relativity. No mathematical justification is ever given for how there could possibly be a universal wavefunction; it is simply asserted that there is. It does not fall out naturally from QM, a theory which only deals with relative wavefunctions.

    This paper shows some technical arguments for the impossibility of a universal wavefunction:

    2) The EPR paper proves that the statistical predictions of QM violate causal locality (although not relativistic locality), and MWI proponents claim they can get around this by assuming that the statistical predictions, given by the Born rule, are just a subjective illusion. But this makes no sense. A subjective illusion still arises somehow; it still needs a physical explanation, and any attempt to give a physical explanation must necessarily reproduce Born rule probabilities, which, as Einstein already proved, violate causal locality. Some try to redefine locality in terms of relativistic locality (no-communication), but even Copenhagen is local in that sense!

    These papers show how interpretations like MWI simply cannot be compatible with causal locality:

    3) MWI proponents also forget that nobody on earth has ever seen a wavefunction. The wavefunction is just a mathematical tool used to predict the behavior of particles with definite values. The Born rule wasn’t added for fun. Einstein lamented how, if you evolve a radioactive atom according to the Schrodinger equation, it never at any point evolves into anything that looks like decay or no-decay. The evolved wavefunction is very different from anything we have ever actually observed, and you can only tie it back to what we observe with the Born rule, which converts the wavefunction into a probability distribution over decay or no-decay.

    If you throw out the Born rule, then you are left with a mathematical description of the universe which has no relationship to anything we ever observe or can ever observe. This lecture below explains this problem in more detail:



  • Quantum computers are theoretically faster because of the non-separable nature of quantum systems.

    Imagine a classical computer where some logic gates flip bits randomly, and multi-bit logic gates can flip them randomly but in a correlated way. These kinds of computers exist and are called probabilistic computers; you can represent all the bits using a vector and the logic gates with matrices called stochastic matrices.

    The vector is necessarily non-separable, meaning you cannot get the right predictions if you describe the statistics of the computer with a separate vector assigned to each p-bit; you must assign a single vector to all the p-bits taken together. This is because the statistics can become correlated with each other, i.e. the statistics of one p-bit can depend upon another, and if you describe them using separate vectors you lose the information about the correlations between the p-bits.
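
    As a minimal illustration (my own toy numpy sketch, not taken from the paper referenced at the end), here are two perfectly correlated p-bits: the joint vector retains the correlation, separate per-bit vectors recombined as a product distribution lose it, and a correlated two-bit flip gate shows up as a stochastic matrix acting on the joint vector.

    ```python
    import numpy as np

    # Joint distribution over two perfectly correlated p-bits,
    # states ordered 00, 01, 10, 11.
    joint = np.array([0.5, 0.0, 0.0, 0.5])

    # Marginal distribution of each p-bit taken on its own.
    p0 = np.array([joint[0] + joint[1], joint[2] + joint[3]])  # first p-bit
    p1 = np.array([joint[0] + joint[2], joint[1] + joint[3]])  # second p-bit

    # Rebuilding a joint state from the separate per-bit vectors gives a
    # product distribution, which has lost the correlation entirely.
    product = np.kron(p0, p1)
    print(joint)    # [0.5 0.  0.  0.5]
    print(product)  # [0.25 0.25 0.25 0.25]

    # A correlated two-bit gate as a stochastic matrix: with probability 0.5
    # it flips both bits together (each column sums to 1).
    G = np.array([
        [0.5, 0.0, 0.0, 0.5],
        [0.0, 0.5, 0.5, 0.0],
        [0.0, 0.5, 0.5, 0.0],
        [0.5, 0.0, 0.0, 0.5],
    ])
    print(G @ np.array([1.0, 0.0, 0.0, 0.0]))  # [0.5 0.  0.  0.5]
    ```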

    The p-bit vector grows in complexity exponentially as you add more p-bits to the system (complexity = 2^N where N is the number of p-bits), even though the total state of all the p-bits only grows linearly (complexity = 2N). The reason for this is purely epistemic. The physical system only grows in complexity linearly, but because we are ignorant of the actual state of the system (2N), we have to consider all possible configurations of the system (2^N) over an infinite number of experiments.

    The exponential complexity arises from considering what physicists call an “ensemble” of individual systems. We are not considering the state of the physical system as it exists right now (which only has a complexity of 2N), precisely because we do not know the values of the p-bits. We are instead considering a statistical distribution that represents repeating the same experiment an infinite number of times and tallying the distribution of results, and in such an ensemble the system takes every possible path, so the ensemble has far more complexity (2^N).

    This is a classical computer with p-bits. What about a quantum computer with q-bits? It turns out that you can represent all of quantum mechanics simply by allowing probability theory to have negative numbers. If you introduce negative numbers, you get what are called quasi-probabilities, and this is enough to reproduce the logic of quantum mechanics.

    You can imagine that quantum computers consist of q-bits that can be either 0 or 1 and logic gates that randomly flip their states. But rather than representing a q-bit in terms of its probability of being 0 or 1, you represent it with four numbers: the first two are associated with its probability of being 0 (summing them gives you the real probability of 0) and the second two are associated with its probability of being 1 (summing them gives you the real probability of 1).

    As in normal probability theory, the numbers all have to add up to 1, i.e. 100%, but because you have two numbers assigned to each state, some quasi-probabilities can be negative while the whole thing still adds up to 100%. (Note: we use two numbers instead of one to describe each state with quasi-probabilities because otherwise the introduction of negative numbers would break L1 normalization, which is a crucial feature of probability theory.)

    Indeed, with that simple modification, the rest of the theory just becomes normal probability theory, and you can do everything you would do in ordinary classical probability theory, such as build probability trees and so on to predict the behavior of the system.
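
    A toy sketch of this bookkeeping (the specific numbers are mine and purely illustrative, not a particular physical state or gate from the referenced paper): a q-bit is a length-4 quasi-probability vector whose entries may be negative but still sum to 1, a gate is a quasi-stochastic matrix whose columns each sum to 1, and the update rule is exactly the one from classical probability theory.

    ```python
    import numpy as np

    # Toy quasi-probability vector for one q-bit: entries (a0, b0, a1, b1),
    # with P(0) = a0 + b0 and P(1) = a1 + b1. Numbers are illustrative only.
    q = np.array([0.75, -0.25, 0.25, 0.25])
    print(q.sum())      # 1.0 -> still normalized like a probability vector
    print(q[0] + q[1])  # 0.5 -> ordinary probability of reading out 0
    print(q[2] + q[3])  # 0.5 -> ordinary probability of reading out 1

    # A "gate" is then a quasi-stochastic matrix: real entries, each column
    # summing to 1, but individual entries may be negative.
    G = np.array([
        [ 0.5,  0.5,  0.5, -0.5],
        [ 0.5, -0.5,  0.0,  0.5],
        [ 0.0,  0.5,  0.5,  0.5],
        [ 0.0,  0.5,  0.0,  0.5],
    ])
    print(G.sum(axis=0))  # [1. 1. 1. 1.]

    # The update rule is exactly the same as in classical probability theory.
    q_out = G @ q
    print(q_out)          # [0.25  0.625 0.125 0.   ] -- still sums to 1
    ```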

    However, this is where it gets interesting.

    As we said before, the exponential complexity of classical probability is assumed to be merely epistemic, because we are considering an ensemble of systems even though the physical system in reality only has linear complexity. Yet it is possible to prove that the exponential complexity of a quasi-probabilistic system cannot be treated as epistemic. There is no classical system with linear complexity where an ensemble of that system will give you quasi-probabilistic behavior.

    As you add more q-bits to a quantum computer, its complexity grows exponentially in a way that is irreducible to linear complexity. If you want to simulate it on a classical computer, every additional q-bit requires the number of classical bits to grow exponentially. At just 300 q-bits, the complexity is already 2^N = 2^300, which means the number of bits you would need to simulate it exceeds the number of atoms in the observable universe.
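
    A quick back-of-the-envelope check of that figure (the 10^80 count of atoms in the observable universe is the usual rough estimate, not an exact number):

    ```python
    amplitudes = 2 ** 300                    # parameters needed to describe 300 q-bits
    atoms_in_observable_universe = 10 ** 80  # common rough estimate
    print(len(str(amplitudes)))              # 91 -> 2^300 is roughly 2.0e90
    print(amplitudes > atoms_in_observable_universe)  # True
    ```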

    This is what I mean by quantum systems being inherently “non-separable.” You cannot take an exponentially complex quantum system and imagine it as separable into an ensemble of many individual linearly complex systems. Even if it turns out that quantum mechanics is not fundamental and there are deeper deterministic dynamics, the deeper deterministic dynamics must still have exponential complexity for the physical state of the system.

    In practice, this increase in complexity does not mean you can always solve problems faster. The system might be more complex, but it takes clever algorithms to actually translate that complexity into problem solving, and currently there are only a handful of known algorithms that get a significant speedup from quantum computers.

    For reference: https://arxiv.org/abs/0711.4770


  • If you have a very noisy quantum communication channel, you can combine a second procedure called entanglement distillation with quantum teleportation to effectively bypass the noisy quantum channel and send a qubit over a classical communication channel. That is the main utility I see for it. Basically, it is very useful for transmitting qubits over a noisy quantum network.


  • The people who named it “quantum teleportation” had in mind Star Trek teleporters which work by “scanning” the object, destroying it, and then beaming the scanned information to another location where it is then reconstructed.

    Quantum teleportation is basically an algorithm that performs a destructive measurement (kind of like “scanning”) of the quantum state of one qubit and then sends the information over a classical communication channel (could even be a beam if you wanted) to another party which can then use that information to reconstruct the quantum state on another qubit.

    The point is that there is still the “beaming” step, i.e. you still have to send the measurement information over a classical channel, which cannot exceed the speed of light.
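
    For anyone curious what that looks like concretely, here is a minimal state-vector sketch of the standard teleportation circuit in numpy (my own illustration; the state psi is arbitrary, and a real implementation would of course run on actual qubits rather than simulated vectors):

    ```python
    import numpy as np

    # Qubit ordering: q0 = state to send, q1 = Alice's half of the Bell pair,
    # q2 = Bob's half. Basis index = 4*q0 + 2*q1 + q2.
    I = np.eye(2, dtype=complex)
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Z = np.array([[1, 0], [0, -1]], dtype=complex)
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    P0 = np.diag([1, 0]).astype(complex)   # projector onto |0>
    P1 = np.diag([0, 1]).astype(complex)   # projector onto |1>

    psi = np.array([0.6, 0.8j], dtype=complex)                 # state to "teleport"
    bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)  # shared Bell pair
    state = np.kron(psi, bell)                                  # full 3-qubit state

    # Alice: CNOT (q0 control, q1 target), then H on q0 -- the "scanning" step.
    cnot_01 = np.kron(P0, np.kron(I, I)) + np.kron(P1, np.kron(X, I))
    state = np.kron(H, np.kron(I, I)) @ (cnot_01 @ state)

    # Alice measures q0 and q1, destroying her copy of the state.
    probs = np.array([np.sum(np.abs(state[4*m0 + 2*m1 : 4*m0 + 2*m1 + 2])**2)
                      for m0 in (0, 1) for m1 in (0, 1)])
    m0, m1 = divmod(np.random.choice(4, p=probs), 2)

    # The two classical bits (m0, m1) are all that gets sent to Bob,
    # who applies the corresponding corrections to his half of the pair.
    bob = state[4*m0 + 2*m1 : 4*m0 + 2*m1 + 2]
    bob = bob / np.linalg.norm(bob)
    if m1: bob = X @ bob
    if m0: bob = Z @ bob

    print(np.abs(np.vdot(bob, psi)))  # ~1.0: Bob's qubit now matches psi up to phase
    ```

    The only thing that travels from Alice to Bob here is the pair of classical bits, which is exactly the “beaming” step and why the protocol cannot send anything faster than light.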




  • I tend to agree with people like Wittgenstein, Bohm, Engels, and Benoist that identities are ultimately socially constructed. Aristotle believed identities are physically real, so that a tree or a ship physically has an identity of “tree” or “ship.” But then you naturally run into the Ship of Theseus paradox, as well as many other paradoxes of the same sort, like the water-H2O paradox or the teletransportation paradox, where it becomes ambiguous as to when this physical identity actually comes into existence and when it goes away.

    The authors that I cited basically argue that identities are all socially constructed. “Things” don’t actually have physical existence. They are human creations.

    One analogy I like to make is that they’re kind of like a trend line on a graph. Technically, the trend line doesn’t add any new information; it just provides a simplified visual representation of the overall trend of the data, but all that information is already held within the original dataset.

    Human brains have limited processing capacity. We cannot hold all of nature in our heads at once, so we reduce it to simplified representations of the overall patterns that are relevant and important to us. We might call that rough collection of stuff over there a “tree” or a “ship.” The label “tree” or “ship” represents an overly simplified concept of some relevant properties of interest about that stuff over there, but if you go analyze that stuff very closely, you may find that the label is actually rather ambiguous and doesn’t capture the full complexity of that stuff.

    Indeed, if we could somehow hold all of nature in our heads simultaneously, we would not need to divide the world into “things” at all. We would just fully comprehend how it all interacts as a single woven unified whole, and the introduction of any “thing,” any identity, would just be redundant information.

    Indeed, to some extent, it has always been both necessary and proper for man, in his thinking, to divide things up, and to separate them, so as to reduce his problems to manageable proportions; for evidently, if in our practical technical work we tried to deal with the whole of reality all at once, we would be swamped…However, when this mode of thought is applied more broadly…then man ceases to regard the resulting divisions as merely useful or convenient and begins to see and experience himself and his world as actually constituted of separately existent fragments…fragmentation is continually being brought about by the almost universal habit of taking the content of our thought for ‘a description of the world as it is’. Or we could say that, in this habit, our thought is regarded as in direct correspondence with objective reality. Since our thought is pervaded with differences and distinctions, it follows that such a habit leads us to look on these as real divisions, so that the world is then seen and experienced as actually broken up into fragments.

    — David Bohm, “Wholeness and the Implicate Order”


  • I mean, not really. You just said it wouldn’t have made a difference if I voted for Stein vs Hillary.

    I am not talking about “you.” I have no idea who you are. I am talking about your strategy. People followed your strategy and it got us into the bind we are currently in.

    I got several lifelong Republicans to change their voter registration to Democrat to vote for Bernie. I do not endorse voting 3rd party anymore. I used to think you should vote your conscience no matter what. I believed in a free and fair democracy. Now I think you should strategize even if the democrats suck too.

    And people have followed your strategy and it failed. It turns out that having no principles and endorsing warmongering fascists who promise nothing to their constituents will just make you lose the election.

    The point of voting third party isn’t even necessarily some expectation that the third party wins, but to pressure the Democrats to actually stand for something so that the Democrats will win. Your strategy of “vote blue no matter who” always leads to the Democrats not only losing, but also shifting farther to the right before losing, because they have no incentive to have any populist principles, and then this hands the reins to an even farther-right Republican administration.

    Again, Trump’s victory is a result of your strategy. This administration is yours. Own it.

    People did not vote third party in this election in numbers significant enough to sway it. People did not do so last election either. People have been consistently following “vote blue no matter who” for decades now; the last time there was any serious backing for a third party was back in the 2000 election.

    I do take responsibility for being a dumb young 20s person falling for what I now think was Russian propaganda

    Russiagate in 2025 🤦‍♀️

    Yeah, I’m sure it was those few hundred dollars in ads on Facebook, and not the open support for an industrial-scale holocaust, an out-of-control cost of living crisis, the shift towards jingoism and running on massively ramping up military spending, and constantly attacking Trump from the right by saying he isn’t hawkish enough on the border.

    Yeah yeah, I’m sure none of those had anything to do with people not being very inspired to vote for the Democrats. The Democrats are unpopular because of a few Facebook ads, not because they’ve done anything wrong! Sure, sure.

    but Trump winning a second time had nothing to do with me.

    You take no responsibility for anything. You just constantly want to shift the blame elsewhere. Your failures are all secretly the fault of Russia or of other poor and struggling Americans. The problem is never you or your beloved fascist politicians. You do not have the level of maturity needed to admit when your strategy has failed.

    That’s on people who didn’t vote at all in 2024.

    It is the responsibility of a party to have a strategy to attract voters. Saying “we should have a strategy that appeals to no one” and then losing the election and blaming people for not voting for you just reveals that you, at the end of the day, genuinely do not care about the outcome of the election. This is why you try to shift the blame of your own failures onto other poor and struggling Americans.

    I want the Democrats to appeal to non-voters so they will actually win. You want the Democrats to appeal to no one and then just shame non-voters as bad people for not mindlessly backing your beloved fascist politician, because you, at the end of the day, don’t actually care whether the Democrats win or lose, as politics is just a game to you. I want a strategy that actually wins elections. Your strategy has been the dominant one for decades now. As the old saying goes, insanity is trying the same thing over and over again and expecting different results.




  • Capitalist oligarchs are the ones who rule society, so if there are problems in society, the fault ultimately comes back to them as the rulers. They, however, will never admit responsibility for anything, and so they will always seek to shift the blame to other people; but since they are the ones who rule, the blame must be shifted to the non-rulers, i.e. to regular people. Shifting the blame onto all regular people would be vastly unpopular, so they instead pick out a subset of regular people to blame. Whether it is Jews, Somalis, trans people, immigrants, etc, it is always the fault of some minority group with no political power, and never the fault of those who control everything and are in the position of power to make all the decisions.



  • bunchberry@lemmy.world to Memes@lemmy.ml · Victims of Communism · edited 26 days ago

    It is the academic consensus even among western scholars that the Ukrainian famine was indeed a famine, not an intentional genocide. This is not my opinion, but, again, the overwhelming consensus even among the most anti-communist historians like Robert Conquest who described himself as a “cold warrior.” The leading western scholar on this issue, Stephen Wheatcroft, discussed the history of this in western academia in a paper I will link below.

    He discusses how there was strong debate over whether it was a genocide in western academia up until the Soviet Union collapsed and the Soviet archives were opened. When the archives were opened, many historians expected to find a “smoking gun” showing that the Soviets deliberately had a policy of starving the Ukrainians, but no such thing was ever found, and so even the most hardened anti-communist historians were forced to change their tune (and indeed you can find many documents showing the Soviets ordering food to Ukraine, such as this one and this one).

    Wheatcroft considers Conquest’s change of opinion to mark the end of that “era” in academia, but he also mentions that very recently there has been a revival of the claims of “genocide.” These are clearly motivated and pushed by the Ukrainian state for political reasons, not academic ones. It is literally a propaganda move. There are hostilities between the current Ukrainian state and the current Russian state, and so the Ukrainian state has a vested interest in painting the Russian state poorly, and reviving this old myth is good for its propaganda. But it is just that: state propaganda.

    Discussions in the popular narrative of famine have changed over the years. During Soviet times there was a contrast between ‘man-made’ famine and ‘denial of famine’. ‘Man-made’ at this time largely meant as a result of policy. Then there was a contrast between ‘man-made on purpose’, and ‘man-made by accident’ with charges of criminal neglect and cover up. This stage seemed to have ended in 2004 when Robert Conquest agreed that the famine was not man-made on purpose. But in the following ten years there has been a revival of the ‘man-made on purpose’ side. This reflects both a reduced interest in understanding the economic history, and increased attempts by the Ukrainian government to classify the ‘famine as a genocide’. It is time to return to paying more attention to economic explanations.

    https://www.researchgate.net/publication/326562364


  • EPR proves quantum mechanics violates locality without hidden variables, and Bell proves quantum mechanics violates locality with hidden variables, and so locality is not salvageable. People who claim quantum mechanics without hidden variables can be local tend to redefine locality to just be about superluminal signaling, but you can have nonlocal effects that cannot be used to signal. It is this broader definition of locality that is the concern of the EPR paper.

    When Einstein wrote about locality, he didn’t mention anything about signaling; that was not what he had in mind. He was thinking in broader terms. We can summarize Einstein’s definition of locality as follows:

    (P1) Objects within set A interact such that their values are changed to become set A’.
    (P2) We form prediction P by predicting the values of A’ while preconditioning on complete knowledge of A.
    (P3) We form prediction Q by predicting the values of A’ while preconditioning on complete knowledge of A as well as object x where x⊄A.
    (D) A physical model is local if the variance of P equals the variance of Q.

    Basically, what this definition says is that if particles interact and you want to predict the outcome of that interaction, complete knowledge of the initial values of the particles directly participating in the interaction should give you the best possible prediction of the outcome, and no knowledge of anything outside the interaction should be able to improve that prediction. If knowledge from some particle not participating in the interaction allows you to improve your prediction, then the outcome of the interaction has an irreducible dependence upon something that did not locally participate in it, which is of course nonlocal.
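
    A toy Monte Carlo illustration of the bookkeeping in that definition (my own construction, not from the EPR paper): with perfectly anti-correlated ±1 outcomes, as when a singlet pair is measured along the same axis, conditioning on the distant particle’s result collapses the variance of the prediction from 1 to 0.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Perfectly anti-correlated +/-1 outcomes, as for a singlet pair measured
    # along the same axis (toy sampling, purely to illustrate the definition).
    n = 100_000
    alice = rng.choice([-1, 1], size=n)
    bob = -alice

    # Prediction P: Bob's outcome given only what is locally available
    # (a 50/50 distribution), without conditioning on the distant particle.
    var_P = bob.var()

    # Prediction Q: Bob's outcome after also conditioning on Alice's distant
    # result; the anti-correlation pins it down exactly.
    var_Q = np.mean([bob[alice == a].var() for a in (-1, 1)])

    print(var_P)  # ~1.0
    print(var_Q)  # 0.0 -> knowledge of the distant particle improved the prediction
    ```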

    The EPR paper proves that, without hidden variables, you necessarily violate this definition of locality. I am not the only one to point this out. Local no-hidden variable models are impossible. Yes, this also applies to Many Worlds. There is no singular “Many Worlds” interpretation because no one agrees on how the branching should work, but it is not hard to prove that any possible answer to the question of how the branching should work must be nonlocal, or else it would fail to reproduce the predictions of quantum theory.

    Pilot wave theory does not respect locality, but neither does orthodox quantum mechanics.

    The fear of developing nonlocal hidden variable models also turns out to be unfounded. The main fear is that a nonlocal hidden variable model might lead to superluminal signaling, which would lead to a breakdown in the causal order, which would make the theory incompatible with special relativity, which would in turn make it unable to reproduce the predictions of quantum field theory.

    It turns out, however, that none of these fears are well-founded. Pilot wave theory itself is proof that you can have a nonlocal hidden variable model without superluminal signaling. You do not end up with a breakdown in the causal order if you introduce a foliation in spacetime.

    Technically, yes, this does mean it deviates from special relativity, but it turns out that this does not matter, because the only reason people care for special relativity is to reproduce the predictions of quantum field theory. Quantum field theory makes the same predictions in all reference frames, so you only need to match QFT’s predictions for a single reference frame and choose that frame as your foliation, and then pilot wave theory can reproduce the predictions of QFT.

    There is a good paper below that discusses this, how it is actually quite trivial to match QFT’s predictions with pilot wave theory.

    tldr: Quantum mechanics itself does not respect locality, hidden variables or not, and adding hidden variables does not introduce any problems with reproducing the predictions of quantum field theory.


  • I’ve used LLMs quite a few times to find partial derivatives / gradient functions for me, and I know they’re correct because I plug them into a gradient descent algorithm and it works. I would never blindly trust anything an LLM gives me no matter how advanced it is, but in this particular case I could actually test the output, since it’s something I was implementing in an algorithm, so if it didn’t work I would know immediately.
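
    For what it’s worth, a simple way to make that check explicit is to compare the analytic gradient against a central finite-difference estimate before dropping it into the optimizer (the objective f and its gradient here are hypothetical stand-ins, not anything from the original comment):

    ```python
    import numpy as np

    def f(w):       # hypothetical objective, standing in for whatever is being optimized
        return np.sum(w**2) + np.sin(w[0])

    def grad_f(w):  # the analytic gradient being checked (e.g. one an LLM produced)
        g = 2 * w
        g[0] += np.cos(w[0])
        return g

    def numerical_grad(f, w, eps=1e-6):
        # Central finite differences, one coordinate at a time.
        g = np.zeros_like(w)
        for i in range(len(w)):
            d = np.zeros_like(w)
            d[i] = eps
            g[i] = (f(w + d) - f(w - d)) / (2 * eps)
        return g

    w = np.random.randn(5)
    print(np.max(np.abs(grad_f(w) - numerical_grad(f, w))))  # should be tiny, ~1e-9 or less
    ```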



  • Putting aside the fact that you cannot “experimentally prove” anything, as proof is for mathematics, claiming you can experimentally demonstrate fundamental uncertainty is, to put it bluntly, incoherent. Uncertainty is a negative; it is a statement that there is no underlying cause for something. You cannot empirically demonstrate the absence of an unknown cause.

    If you believe in fundamental uncertainty, it would be appropriate to argue in favor of this using something like the principle of parsimony, pointing out the fact that we have no evidence for an underlying cause so we shouldn’t believe in one. Claiming that you have “proven” there is no underlying cause is backwards logic. It is like saying you have proven there is no god as opposed to simply saying you lack belief in one. Whatever “proof” you come up with to rule out a particular god, someone could change the definition of that god to get around your “proof.”

    Einstein, of course, was fully aware of such arguments and acknowledged the possibility that there may be no cause, but he put forward his own arguments as to why treating the randomness of quantum mechanics as fundamental leads to logical absurdities; it’s not merely a problem of randomness, but, as he showed with a thought experiment involving atomic decay, it forces you to reject the very existence of a perspective-independent reality.

    There is no academic consensus on how to address Einstein’s arguments, and so to claim he’s been “proven wrong” is quite a wild claim to make.

    “[W]hat is proved by impossibility proofs is lack of imagination.” (John Bell)