When AI Gets It Very Wrong: 6 Times You Shouldn’t Trust the Bots

We’ve all had those moments where AI seems like magic. A quick email draft here, a speedy meeting summary there - what’s not to love? But as clever as these tools are, they’re not foolproof. In fact, there are still some areas where human judgement isn’t just preferred - it’s essential.

Here are a few examples of when it’s still best to put the bots on mute and trust your own instincts (or those of an experienced colleague).

  1. When someone’s future is at stake

People are complex. Whether it’s giving feedback, handling sensitive issues, or making hiring decisions, most AI won’t understand nuance in the way humans do.

That template for a performance review? Great for structure, but don’t let it dictate your message. AI doesn’t know the colleague behind the data - it can’t sense morale, how someone’s day has been, or the effort behind a task that failed fast. AI can be a helpful springboard, but always bring your own empathy and insight to the table.

What to do instead:

If you’re using AI to assist decision-making, ask yourself: would I be happy explaining this decision, face to face, to the person it affects? Does it sound like me, or like a very polite robot? Then adjust accordingly.

  2. When brand voice really matters

Ever get a suggestion that reads like a press release written by a toaster? “Our innovative synergy optimises operational paradigms”? Yeah… no.

When you're crafting client emails, pitches, or responses to complaints, tone is everything. AI might give you a starting point, but left unchecked, it can sound overly generic. And if you’re in a niche industry, it might miss the mark entirely.

What to do instead:

  • Use AI for structure, a first draft or a rough outline, but edit it yourself to put the heart back in.
  • Always read AI-generated copy out loud with your “client hat” on. If you can’t imagine saying it to a colleague or customer, it needs a human touch.
  • Ask a trusted colleague for feedback instead of relying on a bot for tone.

  3. When accuracy is non-negotiable

AI is the king (or queen) of confidently wrong information.

If there are rules, regulations or risks involved, AI should never have the final say. Whether you’re reviewing a contract, updating compliance documentation, or outlining safety procedures, this is where precision matters most.

AI tools can hallucinate (yes, that’s the polite term), and they sometimes make up facts with startling confidence. A little wrong here can lead to very big problems later – for you, not for ChatGPT.

What to do instead:

  • Double-check anything that sounds too polished or oddly specific.
  • If you don’t recognise a claim, verify it yourself: Google it, ask a human expert or track down the original source.
  • For anything regulatory, legal, or safety-related - AI is not your go-to.

  4. When context is key - company-specific or sector-specific info

ChatGPT once told me to “call your congressman” after a delay in parcel delivery. I’m in Birmingham - not Boston.

That’s the thing with AI: it doesn’t know your context. It’s not in your meetings, it doesn’t know your clients, and it certainly doesn’t understand the quirks of your team or sector. So, when you’re writing something shaped by company culture, recent decisions, or subtle shifts in tone, AI is only ever guessing - and sometimes guessing wildly. It can’t see the full picture, which means it can miss the mark where it matters most.

What to do instead:

  • Learn to prompt effectively - treat it like a new hire: give it clear context, a bit of training, and don’t expect it to be magic on day one (see the example prompt after this list).
  • Keep an eye out for language that’s clearly American or culturally skewed. Be specific in your prompts by stating location and tone: “UK English”, “British tone”, “friendly and professional”.
  • If something feels off, it probably is. Ask yourself: Would this make sense to someone who’s been in the loop? If not, it’s time for a human edit.
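
For example, a context-setting prompt might look something like this (the company and scenario are invented purely for illustration):

“You’re writing for a Birmingham-based recruitment agency. Use UK English and a friendly, professional tone. Draft a short email to candidates explaining that Friday’s interviews have moved to next Tuesday. Keep it under 100 words.”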

  5. When timing is important – crisis comms or live responses

AI is quick, but it doesn't always know how to respond in the moment.

Say something unexpected happens - a system outage, a social media blow-up, or an on-site incident. You need fast, clear, human-led communication grounded in both what’s happening right now and the history behind it. AI might generate a neat statement, but it won’t necessarily understand or incorporate the real-world sensitivities, or the chain of events that just unfolded.

Even small delays caused by fiddling with prompts or “fixing the tone” can cost you valuable time - and trust.

What to do instead:

In high-pressure moments, default to a human writing team or comms lead. Jot down the facts quickly and share them clearly. You can bring AI in afterwards to help polish messaging, or to create debriefs and summaries.

  6. When data is sensitive

AI tools are handy, but they’re not private notebooks. If you’re handling anything confidential - think salaries, health info, client details, or internal strategy - don’t paste it into a public AI tool without checking your company’s policies.

Even if the tool says it doesn’t “remember” your input, anything you type could still be processed on external servers or stored temporarily.

What to do instead:

  • Check your internal AI use guidelines or ask IT before entering anything sensitive.
  • Use secure, approved tools for internal work, especially if it’s commercially or personally sensitive.
  • If in doubt, keep it out (of the chatbot) - for instance, prepare comms with placeholders instead of client details and add the specifics afterwards (see the example below).
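
A redacted draft keeps the structure while leaving the specifics out (the placeholders here are illustrative):

“Hi [CLIENT NAME], a quick update on [PROJECT]: the revised proposal is attached, and we’d welcome your feedback by [DATE].”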

In summary…

AI is brilliant at saving us time, getting us started, and keeping us organised. But it’s not a replacement for critical thinking, empathy or your lived experience.

So, the next time you find yourself auto-completing a sensitive message or letting a chatbot draft that policy, pause and check in with yourself. The bot might be fast, but you’ve got the final say.