Strong “I can fix him” energy out there.
The argument that they can’t learn doesn’t make sense, because models have definitely become better.
They have to either be retrained with new data or have their internal structure improved. It’s an offline process, meaning they don’t learn through the chat sessions we have with them (if you open a new session, it will have forgotten what you told it in the previous one), and they can’t learn through any kind of self-directed research the way a human can.
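To make that concrete, here’s a minimal sketch in plain Python. The `complete()` function is a hypothetical stand-in for any chat-completion API, not a real library call: the model’s weights never change between calls, so the only “memory” is the message list the caller resends each turn.

```python
def complete(messages: list[dict]) -> str:
    # Hypothetical stand-in for a chat-completion call: the model's weights
    # are frozen, so the only "memory" it can use is the messages passed in.
    return "<model reply>"  # placeholder; a real API call would go here


# Session 1: the caller keeps the history and resends it with every turn.
session_1 = [{"role": "user", "content": "My name is Ada."}]
session_1.append({"role": "assistant", "content": complete(session_1)})
session_1.append({"role": "user", "content": "What is my name?"})
reply = complete(session_1)  # can answer, because the history is in the prompt

# Session 2: a fresh list means the model has no trace of session 1.
session_2 = [{"role": "user", "content": "What is my name?"}]
reply = complete(session_2)  # nothing was learned or stored by the model
```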
All of the shortcomings you’ve listed, humans are guilty of too.
LLMs are sophisticated word generators. They don’t think or understand in any way, full stop. This is really important to understand about them.
That’s the People’s Toothbrush!