I've currently settled on Claude. One of my areas of interest is trust in AI systems. Over the last year I've read about Anthropic's leadership and they seem like good enough people... mostly. I've also been reading Jack Clark's Substack for several months and it's really insightful, naturally because I'm really into policy stuff. Anthropic seems to put ethics at the front of their work (though it's probably just a front). On the other hand, their partnerships with military and surveillance organizations are terrible things, but I guess this is inevitable? Of course, ethics isn't the only dimension in my decision, but it does help with trust. The Anthropic website also has some basic AI courses, and I found those helpful. I'm still a low-usage user, so I don't have a strong attachment to Claude, but it seems good enough.
I'm fairly future-thinking. Just as I'm part of that small percentage of Linux users, I imagine I'll eventually settle on AI tools that fit my sensibility even if they're less powerful and less popular. It's too early to know what that equivalent is.