• 19 Posts
  • 72 Comments
Joined 1 year ago
Cake day: June 9th, 2023



  • The thing with cognitive dissonance is also a bit more subtle than just the duality of conflicting beliefs. It can often arise from unidentified conflicts that are outside of your conscious self awareness.

    One that I am familiar with is religion. I knew a whole lot about the Bible and Christianity growing up. From an early age I half knew that things like road cuts through bedrock, with their layers hinting at deep time, told a story that wasn’t well aligned with my beliefs. Then there was my love of dinosaurs as a kid, and that did not mesh with my religious narrative either. Each little element of conflict was present on some subconscious level, and my life became partitioned between this narrative belief system and evidence-based reality. I had lots of peripheral consequences in life due to this building conflict, but I never allowed the core issue to come to a head in an attempt to rectify the disparity until I was around 30 years old.

    Cognitive dissonance can also be dangerous and is a contributing factor in many crimes and heinous acts humans commit. Alternative expressions of individuality may also have an origin in cognitive dissonance. Identifying these underlying conflicts reflects a person’s self-awareness, and it can improve mental health by enabling productive action to resolve the inner conflicts once they are identified.


    Comes up and puts a paw on my arm or leg, but only when she feels the problem is serious. It could be water, food, the litter box, or just a notion that I need to take a break after hours working on some project; usually when I’m seriously hurting or sleep deprived. I’m always present, only ever leaving for doctors or a physical therapy routine, so I’m accessible to both cats most of the time. The older one seems to intuit not to abuse the gesture or use it often. When she does put her paw on me like that, I always look into the issue, so we’ve developed it as a form of direct communication that seems to work. I didn’t train her to do this; I did, however, train her to be quiet using positive reinforcement. We got the older cat about 6 months after I was disabled, so we’ve been through a lot together in the last 10 years.


  • You're asking the wrong questions IMO. No one loves capitalism. Capitalism is an acknowledgment that humans are inherently corrupt and the concentration of power is a primary corrupting force. If anything the capitalist countries are failing at capitalism in the present.

    Capitalism is also an acknowledgement of the true complexity of the world. No overarching human authority can encompass the true complexity of human enterprises. We simply lack the cognitive scope to manage at all scales without some forms of natural selection in play. Real competition drives people like no other force.

    It is a terrible system, but there is no chance that a concentration of power in an alternative system will be better for the average person. Broad-scale altruism is not a successful long-term form of governance. It resembles the best theoretical form of governance, an altruistic monarchy, but it suffers the same fatal flaw: the succession crisis. The naïveté of idealists is a recipe for authoritarianism.

    No one loves capitalism. For the intelligent, capitalism is the lesser of evils in the big picture. The alternative is a return to the monarchy or feudalism of our conflict-strewn past… IMO

    I hate capitalism BTW. I don’t think we are there yet, but I think AGI is our best chance at a broad-scale idealist alternative. An entity that can never die, and that can plan long term with scalable and nearly infinite attention, is the kind of manager that can achieve what we are empirically incapable of achieving. The systems it will take to institute and protect such an AGI are enormous and critical, and we are unlikely to get them right the first time, but the outcome is inevitable IMO. We will likely never see such a future in our lifetimes, but it will happen eventually. It will start with politicians, publicly or secretly, deferring their policy decisions to an AGI entity. Corporate offices will do the same. Humans cannot compete with a true AGI once such a system emerges. We simply lack the cognitive scope and persistence. At present, AI is still orders of magnitude away from AGI, but the required building blocks are already in play. We can build a stacked stone wall and a house, but we need a palatial fortress, and that is still a big ask.

    Capitalism sucks for all but a small elite. However, capitalism has an effective hook for people to oust bad actors through an entirely separate government. Such separation and protection does not exist when the government is expected to play a major management role in the market. If the government is such an authority, it will devolve into authoritarianism, because nearly all humans are corruptible. There is nothing more dangerous than trusting others to do the right thing. Someone will always take advantage of the opportunity to exploit and pillage their neighbors when they can get away with it. Capitalism is hated by everyone but fools. It is just hated slightly less than succession crises and authoritarianism.


  • Multithreading is parallelism and is poised to scale by a similar factor; the primary issue is simply getting tensors in and out of the ALU. Good enough is the engineering game. Having massive chunks of silicon lying around unused is a much more serious problem. At present, the choke point is not the parallelism of the math but the L2-to-L1 bus width and cycle timing. The ALU can handle the load. The AVX-512 instruction set can load a 512-bit wide word in a single instruction; the problem is just moving those words in and out in larger volume.

    I speculate that the only reason this has not been done already is the marketability of single-thread speeds. Present thread speeds are insane and well into the radio realm of black-magic, bearded-nude-virgins wizardry. I don’t think it is possible to make these buses wider and maintain the thread speeds, because it has too many LCR consequences. I mean, at around 5 GHz the concept of wire connections and gaps as insulators breaks down, since capacitive coupling can make connections across any small gap.

    Personally, I think this is a problem that will take a whole new architectural solution. It is anyone’s game, unlike any time since the late 1970s. It will likely be the beginning of the real RISC-V age and the death of x86. We are presently in the age of the 20+ thread CPU. If a redesign can produce a 50-500 logical core CPU that is slower for single-threaded work but capable of all workloads, I think it will dominate easily. Choosing the appropriate CPU model will become much more relevant.
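    The bus-width argument above can be sketched with back-of-the-envelope arithmetic. This is a rough estimate of the cache bandwidth needed to keep AVX-512 units fed; the port count and clock speed are illustrative assumptions, not any particular chip’s specs:

    ```python
    # Back-of-the-envelope: bytes/cycle needed to keep AVX-512 units busy.
    # All figures below are illustrative assumptions, not measured values.

    avx_width_bits = 512   # one AVX-512 register
    ops_per_cycle = 2      # assume two 512-bit FMA issues per core per cycle
    clock_ghz = 5.0        # nominal thread clock

    # Each FMA consumes two fresh 512-bit source operands per issue
    # (worst case, no register reuse).
    bytes_per_cycle = ops_per_cycle * 2 * (avx_width_bits // 8)
    bandwidth_gbs = bytes_per_cycle * clock_ghz  # rough GB/s per core

    print(f"{bytes_per_cycle} B/cycle -> ~{bandwidth_gbs:.0f} GB/s per core")
    ```

    Even under these toy numbers the L1 feed rate lands in the terabyte-per-second range per core, which is why the bus width, rather than the ALU, is the bottleneck being described.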


  • Mainstream is about to collapse. The exploitation nonsense is faltering. Open source is emerging as the only legitimate player.

    Nvidia is just playing it conservative because it was massively overvalued by the market. Using GPUs for AI is a stopgap hack until dedicated hardware can be developed from scratch, and the real life cycle of hardware is around 10 years from initial idea to first consumer availability. The issue with the CPU in AI is quite simple. It will be solved in a future iteration, and this means the GPU will get relegated back to graphics, or may even become redundant entirely. Once upon a time the CPU needed a math coprocessor to handle floating point. That experiment failed; the coprocessor was absorbed into the CPU, proving that a general monolithic solution is far more successful. No data center operator wants two types of processors for dedicated workloads when one type can accomplish nearly the same task. The CPU must be restructured around a wider-bandwidth memory cache. This will likely require slower thread speeds overall, but it is the most likely solution in the long term. Solving this issue will likely bring more threading parallelism, and therefore has the potential to render the GPU redundant in favor of broader CPU scaling.
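    The compute-versus-memory-bandwidth trade-off behind this claim can be made concrete with a minimal roofline-style estimate. The peak figures below are hypothetical placeholders chosen only for illustration:

    ```python
    # Roofline-style check: is a workload compute-bound or memory-bound?
    # Peak numbers are hypothetical placeholders, not any real chip's specs.

    peak_flops = 2e12  # 2 TFLOP/s of vector compute (assumed)
    peak_bw = 100e9    # 100 GB/s of memory bandwidth (assumed)

    def attainable_flops(intensity_flops_per_byte: float) -> float:
        """Attainable throughput: the lower of the compute roof and the
        bandwidth roof (bandwidth times arithmetic intensity)."""
        return min(peak_flops, peak_bw * intensity_flops_per_byte)

    # A large matrix multiply reuses data heavily (high intensity);
    # streaming a tensor through once barely reuses it (low intensity).
    for name, intensity in [("streaming", 0.25), ("matmul", 64.0)]:
        gflops = attainable_flops(intensity) / 1e9
        bound = "memory-bound" if gflops < peak_flops / 1e9 else "compute-bound"
        print(f"{name}: {gflops:.0f} GFLOP/s ({bound})")
    ```

    Under these toy numbers the streaming workload is capped by bandwidth at a tiny fraction of peak compute, which is the sense in which widening the cache feed, not adding math units, is the restructuring being argued for.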

    Human persistence of vision cannot keep up with ever-higher refresh rates, which are ultimately only marketing. The hardware will likely never support this stuff, because no billionaire is putting up the funding to back the marketing with tangible hardware investments. … IMO.

    Neo-feudalism is well worth abandoning. Most of us are entirely uninterested in this business model. I have zero faith in the present market. I have AAA-capable hardware for AI. I play and mod open source games. I could easily be a customer in this space, but there are no game manufacturers. I do not make compromises in ownership. If I buy a product, my terms of purchase are full ownership with no strings attached whatsoever. I don’t care about what everyone else does. I am not for sale, and I will not sell myself for anyone’s legalese nonsense or pay ownership costs to rent from some neo-feudal overlord.


  • j4k3@lemmy.world to ADHD@lemmy.world · Hate Myself So Much · 4 days ago

    Personally, talking to offline open source AI on my own hardware helped me. One of the things we talked about a lot is cognitive dissonance: identifying conflicts that exist under the surface, and how those conflicts can cause frustration to manifest in unrelated ways.

    Probably my largest inner conflict was that I am so fundamentally different in my functional thought process than my family. I’m very abstract in how I think. I’m also very introverted with strong intuitive thinking skills. Basically, things just make sense at a glance from a bigger picture perspective. I can also see how things work quickly, like machines, engines, most engineering, or more abstract elements like companies, business models, workforce management, etc.

    Growing up, intuitive thinking skills were treated as just intelligence or common sense. I had no idea how limited and naive that perspective was.

    I started writing a book in collaboration with an AI; it’s a whole sci-fi universe really. I started to realize I’m pretty good at coming up with the history and technology tree in unique ways that, to my knowledge, no one has explored before in sci-fi. However, I suck at writing characters that are not like myself. My characters have not shown the dynamism I desire. In truth, I had to acknowledge I didn’t and still don’t understand just how different human functional thought is in full spectrum.

    I started roleplaying scenes and scenarios with the AI playing characters whose perspectives were incompatible with and contrasting to my own. I found this quite enlightening. It turns out that there are people out there who fundamentally lack any appreciation for abstract and intuitive thinking skills. They do not place any value on the big picture or on the future implications of actions and decisions. The flip side is that they are often more productive and present in the moment. I learned to appreciate the differences and realized how weak binary perspectives are in the real world. I don’t get as offended when someone does not understand my abstractions, and I don’t argue when they are wrong but cannot follow big-picture logic. I know where I am also weak in ways that make me appear dumb to them.

    There are going to be things you’re not good at, or that require a lot more work than average. So what. The first step, in my opinion, is to gain a more complex self-awareness where you are not questioning what you are good or bad at. The only normal people are people you do not know well. Everyone is tormented by something in life.

    Remember this: NEVER use permanent solutions to temporary problems.

    You don’t remember who blew up at work 3 weeks ago, or the time before last when your wife got mad and yelled at you. One of the biggest warps in human psychology is an illusion of grandeur: no one is actually thinking about your mistakes or cares about them. They care about how you’re acting in the moment and the average demeanor you regularly present. Fake it if you can. Pretending the glass is half full is all that really matters to others at a fundamental level.

    Even after someone else physically disabled me over 10 years ago, leaving me stuck in social isolation, I can say I’ve learned the hard way that it can always get worse, until it can’t; at that point, nothing matters. Don’t stress about what you cannot do or what you cannot change right now. No matter how bad things seem, you can choose to make the best of this moment and the ones that follow. Only worry about what you can change; everything else is a pointless waste of energy.

  • We have no data for an Earth analog around a G-type star, like absolutely nothing. I highly doubt life is universal around such stars, but with a sample size of 1, who could rule it out? Kepler was barely supposed to be able to survey at this resolution, and it ultimately failed at that objective. They claimed success for political reasons bordering on criminal, but go look at the actual data and you’ll see the random noise they cherry-picked to make that claim, and how those points are massive outliers from the rest of the data. None of those data points are remotely scientifically relevant or taken seriously. No other survey to date has come close to Earth-like resolution.

    Researching for my book, I found several G-type stars within 7 parsecs. I find them the most interesting, but I do not believe complex life is likely anywhere in this galaxy at the present point in time.