- cross-posted to:
- technology@lemmy.zip
The onrushing AI era was supposed to create boom times for great gadgets. Not long ago, analysts were predicting that Apple Intelligence would start a “supercycle” of smartphone upgrades, with tons of new AI features compelling people to buy them. Amazon and Google and others were explaining how their ecosystems of devices would make computing seamless, natural, and personal. Startups were flooding the market with ChatGPT-powered gadgets, so you’d never be out of touch. AI was going to make every gadget great, and every gadget was going to change to embrace the AI world.
This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.
There was just one problem with the whole theory: the tech still doesn’t work. Chatbots may be fun to talk to and an occasionally useful replacement for Google, but truly game-changing virtual assistants are nowhere close to ready. And without them, the gadget revolution we were promised has utterly failed to materialize.
In the meantime, the tech industry allowed itself to be so distracted by these shiny language models that it basically stopped trying to make otherwise good gadgets. Some companies have more or less stopped making new things altogether, waiting for AI to be good enough before it ships. Others have resorted to shipping more iterative, less interesting upgrades because they have run out of ideas other than “put AI in it.” That has made the post-ChatGPT product cycle bland and boring, in a moment that could otherwise have been incredibly exciting. AI isn’t good enough, and it’s dragging everything else down with it.
Archive link: https://archive.ph/spnT6
Just look at how people use their smart speakers: they ask them to set timers or for the weather. AI will be the norm once the benefit is obvious to everyone, like when I can trust my AI with my credit card info and allow it to purchase stuff for me. Right now AI is basically a self-organizing dictionary that is often confidently incorrect. Not once has GPT told me it didn’t know something.
This isn’t necessarily on topic, but if you want to know what they’re betting on for AI, look into contentcyborg.ai.
They want to flood the internet with fake people, opinions, engagement, etc. This creates a feedback loop of marketing budgets flooding social media for the engagement frenzy, creating an ideological Dutch disease where anything will be said for a buck. We’re already there culture-wise, obviously, but now we’re offshoring fake souls, I guess.
This whole promise hinged on the idea that Siri, Alexa, Gemini, ChatGPT, and other chatbots had gotten so good, they’d change how we do everything. Typing and tapping would soon be passé, all replaced by multimodal, omnipresent AI helpers. You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you. Tech companies large and small have been betting on virtual assistants for more than a decade, to little avail. But this new generation of AI was going to change things.
I have never and will never interact with my phone by speaking to it and I don’t want to be around other people who are doing that. The beauty of a touch screen and buttons is you can silently operate the device. Software can always be updated. They should be focusing on hardware features if they want to be innovative. Maybe they could start by adding back some of the shit they’ve removed.
the bar is so low that even a lean secure android OS without bloatware would be revolutionary.
I agree, but I suspect the problem is that people have different opinions on where the line is. Presumably somebody, somewhere actually plays that stupid Candy Crush thing on Windows, for example. It’s probably a ‘valuable service’ for it to be pre-installed for them.
I kinda hate them but they’re allowed to like it.
I could live with pre-installed apps as long as they can be removed… I remember having useless apps like Google Music, YouTube, weird browsers, and other random apps that could not be removed. I could only uninstall the updates, but the base version would remain… That stuff is predatory. If I don’t use them, why should I be forced to have them on my phone?
You wouldn’t need to do things yourself; you’d just tell your assistant what you need, and it would tap into the whole world of apps and information to do it for you.
Ah, the promise made by every futurist ever.
They’re always wrong. New inventions are used to unemploy people, to insert themselves between you and what you want in order to extract money, or to try to sell you something.
This just reminds me of the blockchain/NFT craze. NFTs are stupid as shit, but blockchain has its uses, just like LLMs. I refuse to call it AI because it’s not; it’s a language generator. A particularly expensive language generator that costs a lot in terms of resources, but still just a language generator. It’s not all that different from the crypto craze, especially if you want a GPU for other things.
I use LLMs for some things and they are great, but to be honest, I haven’t seen any real world usage of blockchains besides cryptocurrencies and NFTs.
Generations* Let’s not forget we produce 3 or 4 models of phones a year, per manufacturer. That’s an alarming amount of e-waste for the planet, and we don’t have the raw materials to keep up this pace forever.
And I still can’t find a phone that has a replaceable battery, a proper IP rating, and doesn’t cost an arm and a leg, or, alternatively, costs thrice as much as its potato display and CPU would warrant. You can get two of the three, but not all of them. I won’t even begin to speak of having an unlocked bootloader, or, with the rest in place, also a flush camera. FFS, I’d be fine with no camera; I just don’t want a hump. I’d be fine with 720p, since it’s a tiny screen after all, but good contrast instead of 8K doesn’t seem to be a thing that companies think anyone would be interested in.
Stop fucking innovating, just apply lessons already learned. Design a phone with the mindset of designing a bottle opener.
Depends on what you mean by forever. Who knows what tomorrow brings. We could be smashed back to the stone age, and effectively extinct, sometime next week.
I would argue that they moved to LLMs because they had run out of ideas for actually improving cellphones. It wasn’t that they were distracted by them. They are trying to distract us, because they need to cell new phones every year and nothing they’ve come up with really justifies shelling out $1200 for a phone that’s virtually the same as the previous 3-5 iterations.
This “new phone every year” is the worst consumer crapfest we have going. AI features feel like clutching at straws when seemingly everyone hates the battery life on every single phone. Slap a larger battery in there? Well now you get shit AI that burns whatever extra capacity was gained. I can’t name a single quality on an iPhone model from the last 6 years that I truly wanted, other than the size of my 13 mini. It works fine and it fits in my pocket. Now make one that stays on for a full 24 hours and doesn’t need a battery replacement every 2 years.
Blame the isheep for purchasing every crap offered.
There are plenty on Android as well and they also existed before smartphones.
Me breathing a sigh of relief for still using my S10.
It makes calls, sends texts, and I can read Lemmy with the app. What more do I need?
LG V20 gang. I dread the day my work apps stop working because the Android version is too old.
I’m not sure if that’s a typo or brilliant. They need to “cell” new phones every year, indeed.
Celling cell phones is indeed profitable.
I’ve been using a Sunbeam flip phone for a year or so. Paid for the phone up front, and pay $3/mo for use of maps, speech recognition, and continued bugfixes.
Even if phones never got new features, dev time still needs to be committed to security updates, and services (like Siri) need to be paid for. The model of getting 100% of your revenue from new phone sales is starting to break. If I could pay $3/mo for Siri or whatever and never have my phone go obsolete, I think that’d be a good deal.
What the heck are you on about? That’s the worst possible solution to this. Are you some sort of masochist?
If Siri is something that needs to be paid for, don’t bundle it with the system. Charge extra from the start, and people can opt in to that shit.
Also, they run a massively profitable software store, and THAT is what justifies and pays for the bug fixing and security patches to the overall OS.
The “cell a year” practice isn’t to cover development costs, it’s to bring in massive profit by milking the consumeristic herd that buys their crap.
Heh forgot about the App Store.
Maybe a bad example, but there’s certainly a recent trend of purpose-built hardware, whose “free” services fail to justify the expense of the necessary backend infrastructure, getting turned into useless landfill.
Car Thing, Facebook Portal, and this dumb little treat dispensing dog webcam that I used to have come to mind.
Everyone hates subscriptions, but when it comes to hardware that needs to generate revenue to function, I think a token dollar or so a month is appropriate.
Edit: also thinking about it more, core OS software features that are arbitrarily linked to new hardware (like Apple Intelligence) are definitely designed to sell more phones over just selling more software on existing phones. I think it’s fair to say that there’s a revenue link there.
It’s more boring than this, I think. The AI FOMO is real, so they cram it in, rather clumsily and ultimately pointlessly. But there were so many missed opportunities on Apple and Samsung flagships this year, and it boils down to the capitalistic urge to save money while charging customers the same, and to having no real competition. OPPO, OnePlus, and Vivo all have better devices, but importing them and getting them to work on US carriers is basically not possible. Not to mention the incentives the carriers throw at you to keep you locked in to that manufacturer.
Honestly yeah, none of the crap being made right now is going to seem relevant in the future, just like 3D TVs.
3D TVs are my favorite analogy. They’re the easiest way to illustrate the bubble of hype.
That’s the saddest part: I loved my 3D TV until they stopped making media for it. It was a fun gimmick, but I was definitely not “most consumers,” lol.
I’m curious as to what the opinion of AI will be in 10 years
Blockchain 10 years ago was hyped like AI is now.
Blockchain is now used by the US president to make money in barely legal ways.
That’s not a good outlook.
I’m betting the same opinion we have today about 3D TVs
Probably the same as we have now, “be neat if and when it eventually arrives”.
I’ve heard it put very well that AI is either having a Napster moment, in which case we will not recognise the world 10 years from now, or it’s having an iPhone moment, and it will get marginally better at best but is essentially in its final form.
I personally think it’s more like 3D movies and in 20 years when it comes back around we’ll look at this crap like it was Red and Blue glasses.
I think it’s the iPhone stage. We’ve had predictive text in some form or other for a long time now, and that’s basically what LLMs are. I can’t speak for the image/video generators, but I expect those will become another tool in the box that gets better but does the same thing.
I just can’t see a whole lot of improvement in these products making any changes to how we use them already.
Transformer based LLMs are pretty much at their final form, from a training perspective. But there’s still a lot of juice to be gotten from them through more sophisticated usage, for example the recent “Atom of Thoughts” paper. Simply by directing LLMs in the correct flow, you can get much stronger results with much weaker models.
How long until someone makes a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
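The kind of self-checking “flow” being discussed here can be sketched in a few lines. Everything below is hypothetical: `ask` is a stub standing in for a real chat-model API call, and the SUPPORTED/UNSUPPORTED verifier prompt is an assumed convention for illustration, not any particular paper’s method.

```python
# Minimal sketch of a "generate, then verify" LLM flow.
# `ask` is a stub standing in for a real LLM API call, so the
# example runs offline; the flow, not the model, is the point.

def ask(prompt: str) -> str:
    """Hypothetical stand-in for a chat-model call."""
    canned = {
        "answer": "Paris is the capital of France.",
        "verify": "SUPPORTED",
    }
    return canned["verify" if prompt.startswith("Verify") else "answer"]

def answer_with_self_check(question: str, retries: int = 2) -> str:
    """Draft an answer, then ask the model to judge its own claim.
    Flag the answer as low-confidence if verification keeps failing."""
    draft = ""
    for _ in range(retries):
        draft = ask(f"Answer concisely: {question}")
        verdict = ask(
            "Verify this claim against the question.\n"
            f"Question: {question}\nClaim: {draft}\n"
            "Reply SUPPORTED or UNSUPPORTED."
        )
        if verdict.strip() == "SUPPORTED":
            return draft
    return f"[low confidence] {draft}"

print(answer_with_self_check("What is the capital of France?"))
```

A real version would route both prompts to an actual model, and the verifier could be a separate, stronger model; the open question in the thread is how reliably that second pass catches errors.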
… a flow that can reasonably check itself for errors/hallucinations? There’s no fundamental reason why it couldn’t.
Turing Completeness maybe?
When an LLM fabricates a falsehood, that is not a malfunction at all. The machine is doing exactly what it has been designed to do: guess, and sound confident while doing it.
When LLMs get things wrong they aren’t hallucinating. They are bullshitting.
source: https://thebullshitmachines.com/lesson-2-the-nature-of-bullshit/index.html
guess, and sound confident while doing it.
Right, and that goes for the things it gets “correct” as well, right? I think “bullshitting” can give the wrong idea that LLMs are somehow aware of when they don’t know something and can choose to turn on some sort of “bullshitting mode”, when it’s really all just statistical guesswork (plus some preprogrammed algorithms, probably).
Of course, and that’s why they need an anti-bullshit step that doesn’t currently exist. I still believe it’s possible to rein LLMs in, by maximizing their strengths and minimizing their weaknesses.
they need an anti-bullshit step that doesn’t currently exist.
This will never exist in a complete form. Wikipedia doesn’t have this solved; randomly generated heuristics will certainly never have it either.
I’m not sure humans can do it in a complete form. But I believe that is possible to approach human levels of confidence with AI.
I read that as “if you do the thinking for them, LLMs are quite good”
Well, that thinking flow can be automated, as far as we have seen. The Chain of Thought and Atom of Thoughts paradigms have been very successful and don’t require human intervention to produce improved results.
improved, but still bullshit
Detecting a hallucination programmatically is the hard part. What is truth? Given an arbitrary sentence, how does one accurately measure the truthfulness of it? What about the edge cases, like a statement that is itself true but misrepresents something? Or what if a statement is correct in a specific context, but generally incorrect?
I’m an AI optimist but I don’t see hallucinations being solved completely as long as LLMs are statistical models of languages, but we’ll probably have a set of heuristics and techniques that can catch 90% of them.
I mean, in the end, I think it’s literally an unsolvable problem of intelligence. It’s not like we humans don’t “hallucinate” ourselves. Fundamentally, your information processing is only as good as the information you get in, and if that information is wrong, you’re going to be wrong. Or even just mistakes: we make mistakes constantly, and we’re the most intelligent beings we know of in the universe.
The question is what issue exactly we’re attempting to solve regarding AI. It’s probably more useful to reframe it as “The AI not lying/giving false information when it should know better/has enough information to know the truth”. Though, even that is a higher bar than we humans set for ourselves
Yeah, like, have you ever met one of those crazy guys who think the pyramids were literally built by aliens? Humans can get caught in a confidently wrong state as well.
We used to call those AI winters: barely any progress for years, until someone has a great idea and suddenly there is a new form of AI and a new hype cycle, again ending in an AI winter.
In a few years, somebody will find a way that leaves LLM in the dust but comes with its own set of limitations.
AI image generation is pretty cool if it’s used in moderation and as a test bed. It’s a tool, not a complete piece of work, IMO.
I could see text generation being useful for some things, but I feel like it can very easily and sloppily become a crutch. If it were used in the same spirit as a spreadsheet, I’d feel better about it.
LLMs are just ridiculous to me.
I haven’t gotten anything of use from Apple Intelligence. Even just using it is difficult, and Siri is possibly dumber than she was before.
Siri has not been integrated with AI yet; they pushed that to 2026.
Based on what I’ve seen of my partner’s phone, it provides an assessment of text messages. Why would someone want that?
I’ve used the “writing tools” extensively for minor changes, like changes to capitalization on a large block of text. It makes the phone a little less of a consumption-only device.
I’ve also found the image editing tools handy from time to time, and the automatic calls to ChatGPT on the more complex natural-language questions can sometimes be handy, even if you need to wait a while for the response.
The notification summaries are sometimes very handy and sometimes absurdly incorrect and misleading.
I’m really looking forward to Siri being less frustratingly stupid, but we’ve got a while to wait for that, and we probably shouldn’t set our expectations too high. I do respect that they’ve not shipped it rather than shipping something broken, though.
I just don’t get why they haven’t put AI into the already established ‘assistants’ yet.
Why aren’t Siri or Google Home integrated? Why make new things instead of improving the tech you already have?
If I had to guess, it’s either because of branding, or because they know it doesn’t work that well yet. Probably both.
- It doesn’t work that well.
- If they did that, they couldn’t trick everyone into buying new devices, thus helping recoup the untold billions dumped into LLM-based content theft.
This has been a huge letdown. I thought that at the very least home assistants, which are marginally useful, could become less infuriating with an intelligence boost, but not at all. At this point I’d be happy if I could simply upload a damn 64 KB thesaurus to my Alexa so she wouldn’t ignore everything I say when I don’t remember the exact right commands.
Sounds like you should check out Home Assistant.
Home Assistant is currently developing its own prebuilt ‘Alexas’, but they are only in their ‘preview’ stage for selected devs and influencers. There’s no announced date for when they’ll be commercially available.
There is a hat for the Raspberry Pi that does local voice recognition, which you can use with Rhasspy or another setup.
But, I haven’t really messed with it since they started rolling out voice recognition within HA. My plans for a HAL9000 system are on pause until I finish my microchip pet feeding enclosures project.
Yeah, maybe. Switching infrastructure would be a headache and expensive, though. Last I checked, the off-the-shelf versions (which is how I’d want to start, at least) didn’t have Wi-Fi capability. Is there a turnkey version that does now?
Yeah, ODROID partnered with them to create an off-the-shelf product. It’s pretty pricey, though, but honestly you could run it on a Pi 3B+ for pretty cheap.
A whole generation of basically disposable devices at that
AI is about as useful as the movement to take away human assistants for troubleshooting issues and replace them all with centralized hubs. These hubs are built on the assumption that they will answer anything and everything people have a concern about. However, their fundamental flaw is that they don’t cover every base, and people are left with limited options: they can forget it and just live with it, or they can jump through a few more hoops until they’re talking with a human.
And this kind of over-reliance on AI is what will turn people off from it. I’m seeing AI implemented in places where nobody asked for it, while there are missed opportunities where AI could be implemented but isn’t, for some reason.
AI in and of itself isn’t an entirely bad thing. It is, once again, another great idea ruined by blind executors in big tech who just don’t get it.
My iPhone 14 Pro has no AI and still works as wonderfully well as it did the first day I bought it. And I know that on iOS, you can simply disable the AI element.
But, yeah, the “promise of AI” was always bullshit.