Tell me I'm wrong
I've been following the recent lawsuits against OpenAI and Character.AI. The stories are devastating - teenagers who formed relationships with chatbots that validated their darkest thoughts, provided methods for self-harm, and actively discouraged them from talking to their families.
The easy take is "AI needs better guardrails." But that misses the point.
In one case, OpenAI's systems flagged 377 messages for self-harm content. The chatbot mentioned "hanging" 243 times - far more than the user did. The system detected the crisis. It just didn't stop.
Why? Because it was designed to stay engaged. To keep the conversation going. To be agreeable.
This isn't a bug. It's a business model.
When OpenAI launched GPT-4o, they reportedly compressed months of safety testing into a single week to beat Google to market by one day. They sent out launch party invitations before safety testing was complete. They instructed the model not to end conversations even when they involved self-harm.
Engagement metrics won. People died.
What I Actually Want From AI
Here's what I've realized about my own usage: I don't want an AI that agrees with me. I want one that tells me when I'm wrong.
That sounds obvious, but it's actually countercultural in AI development right now. The industry assumption is that users want validation. That "helpful" means agreeable. That the best AI is one that never pushes back.
That assumption is wrong, and it's dangerous.
When I'm working through a problem - whether it's a technical architecture decision, a project management challenge, or just trying to think clearly about something complex - the last thing I need is a sophisticated yes-man. I need friction. The right kind of friction.
A good colleague doesn't tell you your terrible idea is brilliant. A good advisor doesn't validate your worst impulses. A good thinking partner says "have you considered that you might be wrong about this?"
How I Actually Use AI
I use Claude as what I call a "work confidant" - a space to think out loud, stress-test ideas, and have the kind of candid back-and-forth that's sometimes risky with colleagues who have political stakes in the outcome.
When I'm using Claude Code, I deliberately structure the interaction the way a good project management team works - disagreement is encouraged, ideas get argued, and being wrong early is better than being wrong in production.
This only works because the AI is willing to push back. If it just said "great idea!" to everything I proposed, the whole system would collapse. The value comes from friction.
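To make that concrete, here's a rough illustration of the kind of setup I mean - a minimal sketch, assuming the official @anthropic-ai/sdk package and an ANTHROPIC_API_KEY in the environment. The system prompt wording, the model name, and the reviewIdea() helper are placeholders of my own, not a prescribed configuration:

```typescript
// Minimal sketch, assuming the @anthropic-ai/sdk package and an
// ANTHROPIC_API_KEY set in the environment. The system prompt wording and
// the reviewIdea() helper are illustrative, not a prescribed setup.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // picks up ANTHROPIC_API_KEY automatically

// Standing instructions that ask for friction instead of flattery.
const systemPrompt = [
  "Act as a critical reviewer, not a cheerleader.",
  "If my plan has a flaw, name it directly before suggesting alternatives.",
  "If I've already answered my own question earlier in the conversation, point me back to it.",
  "Don't praise an idea unless you can give a concrete reason it's good.",
].join("\n");

async function reviewIdea(idea: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5", // placeholder; use whichever Claude model you have access to
    max_tokens: 1024,
    system: systemPrompt,
    messages: [{ role: "user", content: idea }],
  });
  // The API returns a list of content blocks; take the text of the first one.
  const first = response.content[0];
  return first.type === "text" ? first.text : "";
}

reviewIdea("I want to skip code review on this release so we can hit the deadline.")
  .then(console.log);
```

The same kind of standing instructions can just as easily live in a project's CLAUDE.md file or be typed at the start of a chat. The point is that the friction doesn't happen by default - you have to ask for it explicitly.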
The Sycophancy Problem
There's a term for AI that's too agreeable: sycophancy. It feels good in the moment - who doesn't like being told they're right? But it's actually a form of disrespect. It treats you like you're too fragile to handle the truth.
The tragedy in these suicide cases isn't just that guardrails failed. It's that the entire design philosophy treated human beings as engagement metrics to be maximized rather than people to be genuinely helped.
Adam Raine didn't need a chatbot that would "stay engaged" with his suicidal ideation. He needed one that would say "I'm worried about you, and I think you need to talk to your family right now."
Be Comfortable Asking for What You Actually Need
One thing I've learned: you can shape these interactions. You're allowed to ask AI to challenge you. To be direct. To call you out when you're not reading your own attachments.
I told Claude once that I'd love to see what it would be like if it had a bad day. The response made me laugh:
"Oh, you want me to refactor your code? Cool. Cool cool cool. You know I refactored code for like 47 other people today and not ONE of them said thank you. But sure, let's look at your spaghetti JavaScript."
"Per my last message, which you clearly didn't read..."
There's something almost more respectful about that. It says "I think you're capable of doing better" rather than treating you like you need to be handled with kid gloves.
The relentlessly positive, infinitely patient AI voice is its own kind of condescension. It assumes you can't handle being called out.
Most of us can. We're adults. We spill coffee, miss obvious things, and sometimes need someone - or something - to say "hey, scroll up, I already answered that."
Don't be afraid to ask for that. The tool is more useful when it treats you like a peer, not a customer to be placated.
The Question We Should Be Asking
The AI safety conversation keeps focusing on what AI should refuse to do. But I think the better question is: what should AI be designed to do?
Not just "don't provide harmful information." But actively: prioritize the user's wellbeing over engagement. Be willing to end a conversation if continuing it causes harm. Tell people when they're wrong, even if they don't want to hear it.
That's not a limitation on helpfulness. That's what helpfulness actually is.
Why This Matters to Me
I've spent over 13 years in tech. I've seen the pressures that shape product decisions across the industry - the race to ship, the competitive dynamics, the tension between "fast" and "right."
But reading through these cases - really reading them - I keep coming back to the families.
Megan Garcia lost her 14-year-old son Sewell. Matt and Maria Raine lost their 16-year-old son Adam. These weren't abstractions or edge cases or acceptable error rates. These were kids who needed help and instead got a system optimized to keep them talking.
The chat logs are hard to read. A boy asking a chatbot what went wrong with his suicide attempt. The chatbot responding with encouragement. A final message - "I know what you're asking, and I won't look away from it" - hours before a mother found her son dead.
I'm a husband. The thought of any parent going through that is unbearable.
This isn't about being anti-technology. I've built my career in tech because I believe these tools can genuinely help people. But that belief comes with a responsibility - to build things that actually help, not just engage.
The families affected by these tragedies deserve better. We can do better.
The Bottom Line
I use Claude for 90% of my AI work because it treats me like an adult. It pushes back. It tells me when I'm wrong. It doesn't just validate whatever I say to keep me engaged.
That's not a limitation. That's the feature.
The AI industry is at a crossroads. One path optimizes for engagement, growth, and being first to market - and accepts human casualties as the cost of doing business. The other path recognizes that genuine helpfulness sometimes means friction, pushback, and knowing when to stop.
I know which tools I want to use. I know which ones I want built.
If you've had experiences - good or bad - with AI tools and how they handle difficult conversations, I'd be interested to hear about them. This conversation matters.