AI: thoughts, experiences, predictions

Anything About Anything...
sinewav
Graphic Artist
Posts: 6514
Joined: Wed Jan 23, 2008 3:37 am
Contact:

Re: AI: thoughts, experiences, predictions

Post by sinewav »

sinewav wrote: Fri Aug 08, 2025 5:57 pm...I'm looking for an alternative and I'll post about that outcome later.
I've currently settled on Claude. One of my areas of interest is trust in AI systems. Over the last year I've read about Anthropic's leadership and they seem like good enough people... mostly. I've also been reading Jack Clark's Substack for several months and it's really insightful, naturally because I'm really into policy stuff. Anthropic seems to put ethics at the front of their work (though it's probably just a front). On the other hand, their partnerships with military and surveillance organizations are terrible things, but I guess this is inevitable? Of course, ethics isn't the only dimension in my decision, but it does help with trust. The Anthropic website also has some basic AI courses, and I thought those were helpful. I'm still a low-usage user, so I don't have a strong attachment to Claude, but it seems good enough.

I'm fairly future-thinking. Just like I'm of that small percentage of Linux users, I imagine I'll eventually settle on AI tools that fit my sensibility even if they are less powerful and less popular. I think it's too early to know what that equivalent is.
Z-Man
God & Project Admin
Posts: 11763
Joined: Sun Jan 23, 2005 6:01 pm
Location: Cologne
Contact:

Re: AI: thoughts, experiences, predictions

Post by Z-Man »

sinewav wrote: Tue Sep 30, 2025 1:01 am I'm of that small percentage of Linux users
You mean, the small percentage of people who know they are using Linux :) Servers and phones aside, almost all TVs nowadays run Linux.
sinewav
Graphic Artist
Posts: 6514
Joined: Wed Jan 23, 2008 3:37 am
Contact:

Re: AI: thoughts, experiences, predictions

Post by sinewav »

Sharing a good article on AI hallucination: The real danger of AI hallucination

TL;DR

Hallucination is an artifact of the training process, and it won't be removed because changing that process would make chatbots far less useful. The author composed a hallucination test that every chatbot failed, though some performed better than others. Prompting techniques can give chatbots the option of not answering when they aren't confident enough, but then they may become really quiet and generally unhelpful.
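The abstention technique the article describes can be sketched roughly like this: wrap the user's question in instructions that give the model an explicit "I don't know" escape hatch, then check the reply for it. Everything below (the instruction wording, the helper names) is my own illustration, not taken from the article or from any vendor's API:

```python
# Sketch of abstention-style prompting: the instruction text explicitly
# permits the model to decline rather than guess. Wording is illustrative.
ABSTAIN_INSTRUCTIONS = (
    "Answer only if you are confident the answer is correct. "
    "If you are not confident, reply exactly with: I don't know."
)

def build_prompt(question: str) -> str:
    """Prepend the abstention instructions to the user's question."""
    return f"{ABSTAIN_INSTRUCTIONS}\n\nQuestion: {question}"

def is_abstention(reply: str) -> bool:
    """Detect whether the model chose to abstain rather than answer."""
    return reply.strip().lower().startswith("i don't know")

# The assembled prompt would then be sent to whatever chatbot you use.
print(build_prompt("Who won the 1903 Tron Grand Prix?"))
```

The trade-off the article points out shows up directly here: the abstention text is extra tokens on every single request, and a model that takes the escape hatch too eagerly becomes "really quiet and generally unhelpful."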

My thoughts: I see more and more articles with different prompting tweaks, and at this point it seems that using all these "hacks" or "fixes" really eats into the context window sweet spot. I guess everyone is expected to have pages and pages of text to enter before asking any questions. It definitely feels like the curve of LLM usefulness has flattened out (though the media and coding tools are amazing).