They can try, but it’s hard to ban stuff that’s decentralized like that. All they realistically could do is prevent companies and organizations based in the US from federating, or from federating with anything outside.
Good luck with that.
As a queer person I don’t really care at this point if China or Russia is tracking me. They aren’t the ones who are currently stripping me and others of rights and so many other things.
I don’t trust any governments on this front, but the government I live under is way more of a concern.
If you are blindly asking it questions without grounding resources, you’re going to get nonsense eventually, unless they’re really simple questions.
They aren’t infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.
Give it documentation or some other context and ask it questions, and it can summarize pretty well and even link things across documents or other sources.
The problem is that people are misusing the technology, not that the tech has no use or merit, even if it’s just from an academic perspective.
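To make the grounding point concrete, here’s a minimal sketch (all the names and the example text are made up for illustration) of stuffing source documents into the prompt so the model summarizes what it was given instead of guessing from its lossy training memory:

```python
def build_grounded_prompt(question: str, documents: list[str]) -> str:
    """Pack source documents into the prompt so the model answers
    from the provided context rather than from memorized training data."""
    context = "\n\n---\n\n".join(documents)
    return (
        "Answer using ONLY the context below. "
        "If the answer isn't in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical example document and question:
prompt = build_grounded_prompt(
    "What port does the service listen on?",
    ["The service listens on port 8080 by default."],
)
```

The same idea scales up to PDFs or documentation dumps; the point is just that the facts ride along in the context window instead of being recalled from the weights.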
There’s something to be said for the idea that bitcoin and other crypto like it have no intrinsic value but can represent value we assign, and can be used as a decentralized form of currency not controlled by one entity. That’s not how it’s actually used, but there’s an argument for it.
NFTs were a shitty cash grab because a token showing you “own” a thing, regardless of what it is, only matters if there is some kind of enforcement behind it. It had nothing to do with actual property rights, and anyone could copy your crappy generated image as many times as they wanted. You can’t do that with bitcoin.
Ah, good old “both sides” argument. Half the reason we are in this mess.
I’ve no love lost for cops, but I also have no sympathy left in me while I wonder if I’m going to be able to keep my job, access healthcare, and generally exist as myself in society without someone deciding to attack me. Fuck him, fuck the cop. There are no innocents in this story.
So I’m going to take some catharsis in that one of the people who would likely have murdered me or anyone like me in the future is gone.
After a week of bad news after bad news, where I am fearful for both my job and my very right to exist in this country, quite frankly I couldn’t care less about your moral grandstanding. This guy tried to overthrow democracy for a man who is currently doing a speed-run of fascism.
He should still be rotting in prison; instead, a criminal was released into the streets by the party of “law and order” for political reasons. And while I have no love lost for cops, this guy getting shot is anything but a tragedy.
Fuck him, fuck Trump, fuck republicans, and fuck anyone who has sympathy for these monsters. These people want me dead for trying to be comfortable in my own skin. I will wish the worst on every last one of them because I know what they want to do to me and I’m tired of people using kid gloves when talking about these people.
I’m tired of this uninformed take.
LLMs are not a magical box you can ask anything of and get answers. If you’re lucky, blindly asking questions can get you some accurate general information, but just like with human brains, you aren’t going to accurately recreate random trivia verbatim from a neural net.
What LLMs are useful for, and how they should be used, is as a non-deterministic context-parsing tool. When people talk about feeding them more data, they’re thinking of how these things are trained. But you also need to give the model grounding context beyond the prompt itself. Give it a PDF manual, a website link, documentation, whatever, and it will use that as context for what you ask it. You can even set it to link back to its references.
You still have to know enough to be able to validate the information it is giving you, but that’s the case with any tool. You need to know how to use it.
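One cheap, partial way to validate output when you’ve given the model a source document is to check that anything it claims to quote actually appears in that source. This is just a sketch I’m making up to illustrate the idea, not a real library:

```python
def verify_quotes(quotes: list[str], source: str) -> dict[str, bool]:
    """Check whether each snippet the model presents as a quote
    actually appears (whitespace/case-insensitively) in the source."""
    norm_source = " ".join(source.split()).lower()
    return {q: " ".join(q.split()).lower() in norm_source for q in quotes}

# Hypothetical model output quotes checked against a hypothetical source:
result = verify_quotes(
    ["listens on port 8080", "supports IPv6"],
    "The service listens on port 8080 by default.",
)
# "supports IPv6" never appears in the source, so it gets flagged False
```

It won’t catch paraphrased hallucinations, but it’s the kind of mechanical sanity check that pairs well with actually knowing the subject.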
As for the spyware part, that only matters if you are using the hosted instances they provide. Even for OpenAI stuff you can run the models locally with open-source software and maintain control over all the data you feed them. As far as I have found, none of the models you run with Ollama or other local AI software have been caught pushing data to a remote server, at least when using open-source software.
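For reference, running locally really does mean everything stays on your machine: Ollama exposes an HTTP API on localhost. A rough sketch of talking to it (the model tag is just an example; the actual network call is commented out since it needs `ollama serve` running):

```python
import json
from urllib import request

def ollama_request(model: str, prompt: str) -> request.Request:
    """Build a request against a locally running Ollama server.
    Everything goes to localhost -- no data leaves your machine."""
    payload = {"model": model, "prompt": prompt, "stream": False}
    return request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = ollama_request("deepseek-r1:7b", "Summarize the text I pasted above.")
# with request.urlopen(req) as resp:              # requires `ollama serve`
#     print(json.loads(resp.read())["response"])
```

Since the endpoint is plain localhost HTTP, it’s also easy to watch the traffic yourself and confirm nothing is phoning home.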
Which is actually something Deepseek is able to do.
Even if it can still generate garbage when used incorrectly, like all of them, it’s still impressive that it will tell you it doesn’t “know” something but can try to help if you give it more context. Which is how this stuff should be used anyway.
Just because people are misusing tech they know nothing about does not mean this isn’t an impressive feat.
If you know what you are doing, and enough to know when it gives you garbage, LLMs are really useful, but part of using them correctly is giving them grounding context outside of just blindly asking questions.
That, and they are just brute-forcing the problem. Neural nets have been around forever, but it’s only in the last 5 or so years that they could do much of anything. There’s been little to no real breakthrough innovation; they just keep throwing more processing power at it with more inputs, more layers, more nodes, more links, more CUDA.
And their chasing of general AI is just the short-sighted nature of them wanting to replace workers with something they don’t have to pay and that won’t argue about its rights.
Been playing around with local LLMs lately, and even with its issues, Deepseek certainly seems to just generally work better than the other models I’ve tried. It’s similarly hit or miss when not given any context beyond the prompt, but with context it certainly seems to both outperform larger models and organize information better. And watching the r1 model work is impressive.
Honestly, regardless of what someone might think of China and various issues there, I think this is showing how much the approach to AI in the west has been hamstrung by people looking for a quick buck.
In the US, it’s a bunch of assholes basically only wanting to replace workers with AI they don’t have to pay, regardless of the work needed. They are shoehorning LLMs into everything, even when it doesn’t make sense to. It’s all done strictly as a for-profit enterprise exploiting user data, and they bootstrapped it by training on creative works they had no rights to.
I can only imagine how much of a demoralizing effect that can have on the actual researchers and other people who are capable of developing this technology. It’s not being created to make anyone’s lives better, it’s being created specifically to line the pockets of obscenely wealthy people. Because of this, people passionate about the tech might decide not to go into the field and limit the ability to innovate.
And then there’s the “want results now” mentality, where rather than take the time to find a better way to build and train these models, they just throw processing power at it. “Needs more CUDA” has been the mindset, and in the western AI community you are basically laughed at if you can’t or don’t want to use Nvidia for anything neural-net related.
Then you have Deepseek, which seems to be developed by a group of passionate researchers who actually want to discover what is possible and find more efficient ways to do things. That’s compounded by sanctions preventing them from using CUDA; restrictions on resources have always been a major driver of technical innovation. There may be a bit of “own the west” in there, sure, but that isn’t opposed to the research.
LLMs are just another tool for people to use, and I don’t fault a hammer that is used incorrectly or to harm someone else. This tech isn’t going away, but there is certainly a bubble in the west as companies put blind trust in LLMs with no real oversight. There needs to be regulation on how these things are used for profit and what they are trained on from a privacy and ownership perspective.
Honestly, even from the beginning it was pretty obvious scraped data was going to have a ton of issues. There’s too much nonsense out there, both from misinformation and from people just not being able to communicate.
That’s before you get into the ethical aspects of stealing other people’s content and the way these things are being misused.
Yeah, I have an issue with details and such, and I’ve had a D&D/tabletop world I want to flesh out and eventually DM, but I suck at some of the details or at linking the things I want to do together.
Been slowly building a base of material for it, and I plan to eventually use various LLMs to link things and flesh out the world, taking whatever they give me as a base to work from for those parts.
Most people don’t understand history. Anything trained on that is gonna struggle too.
As a queer person I’m being very careful about what I say in various spaces right now given the current context. Thinking about replacing accounts that are more tied to me and making some new ones.
Also thinking of using local LLMs to rephrase what I post so writing-pattern detection won’t work.
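As a rough sanity check that a rephrase actually changed my patterns, you can compare simple stylometric features before and after. This is a crude sketch I put together for illustration; real stylometry uses far more signals than this:

```python
import re

def style_features(text: str) -> dict[str, float]:
    """Crude writing-pattern features: average sentence length in words
    and commas per sentence. Only a toy stand-in for real stylometry."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    total_words = sum(len(s.split()) for s in sentences)
    return {
        "avg_sentence_len": total_words / len(sentences),
        "commas_per_sentence": text.count(",") / len(sentences),
    }

# Made-up before/after texts:
original = "I write long, winding sentences, with lots of commas, always."
rephrased = "My sentences are short. They have no commas. They end fast."
before = style_features(original)
after = style_features(rephrased)
```

If the numbers barely move, the local model probably just lightly paraphrased instead of actually changing the writing pattern.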
I haven’t done much with UI in general, but the one time I thought of making some UI stuff in windows I gave up.
Even modifying an existing .net program someone else made for a feature I wanted was a nightmare.
YouTube is a bit of an issue, as unlike the others there’s a lot of content and information you can’t get elsewhere.
I’ve been considering using a proxy to scrape and download subscriptions and add them to a personal server. Probably not practical to do for everything, though, with how much space that would take.
At the very least, see if there’s a wrapper that can strip out a lot of the content and just show the stuff I want to watch vs. all the nonsense they fire at you.
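The wrapper idea could start as simple as filtering a fetched feed against a channel whitelist before downloading anything. The data shapes and names here are hypothetical, just sketching the filtering step:

```python
def filter_feed(videos: list[dict], wanted_channels: list[str]) -> list[dict]:
    """Keep only uploads from channels I actually subscribe to,
    dropping the recommendations/shorts the platform injects."""
    wanted = {c.lower() for c in wanted_channels}
    return [v for v in videos if v["channel"].lower() in wanted]

# Hypothetical scraped feed:
feed = [
    {"title": "Repair log #12", "channel": "SomeTechChannel"},
    {"title": "You won't BELIEVE this", "channel": "ClickbaitCo"},
]
kept = filter_feed(feed, ["SomeTechChannel"])
```

Only the whitelisted items would then get queued for download, which also keeps the storage problem from the proxy idea a lot smaller.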
Even using LLMs isn’t an issue; it’s just another tool. I’ve been messing around with local stuff, and while you certainly have to use it knowing its limitations, it can help for certain things, even if just helping parse data or rephrasing things.
The issue with neural nets is that while it theoretically can do “anything”, it can’t actually do everything.
And it’s the same with a lot of tools like this. People not understanding the limitations or flaws and corporations wanting to use it to replace workers.
There’s also the tech bros who feel that creative works can be generated completely by AI because like AI they don’t understand art or storytelling.
But we also have others who don’t understand what AI is and how broad it is, thinking it’s only LLMs and other neural nets that are just used to produce garbage.
The Nazis were also extremely incompetent when it came to stuff like this. Hitler had generals scrambling behind his back to produce their best weapons, but he kept finding out and making them stop, along with tons of other micromanaging.
Trump is an idiot and so is Musk. Most business people are short-sighted, and fascists even more so. They also may be high on their own farts and think the system won’t collapse with them “in charge”, like a toddler left in a room full of candles and napalm.
Not saying that foreign powers aren’t loving this, but they don’t have to have control over it.