
When Brand Safety Meets Dialogue: A Conversation with H&M’s Chatbot
When H&M’s chatbot met ethical scrutiny, it chose containment over dialogue, raising questions about trust, neutrality, and AI governance.
1. AI as the Voice of the Brand
Recently, while browsing H&M’s website, I started a conversation with their AI chatbot. I wasn’t looking for help with an order or product information. Instead, I wanted to explore something a little less conventional, and a little more controversial: how the chatbot understands ethics, how it reflects H&M’s stated brand values, and how transparent it remains when those values are challenged.
What followed wasn’t a complaint about customer service, or an argument about policy. It became something a bit more interesting: a real-world example of how corporate AI systems respond to value-level scrutiny.
The interaction wasn’t dramatic. It was subtle, and that’s what made it worth examining.
2. From Assistance to Abstraction
The conversation began simply enough. Although I had initiated the chat, I first asked the chatbot why it presumed I wanted assistance. My question wasn’t rhetorical. I was interested in understanding how the system interpreted user intent, particularly since many consumers today make purchasing decisions based on how well a brand’s values align with their own.
Research shows that a large majority of shoppers want a brand’s values to reflect their personal values, with one study finding that 82% of consumers prefer brands whose values align with their own, and many will stop buying from brands when values conflict. In that sense, I wasn’t seeking help with a product, I was seeking help with a decision about whether the brand’s stated values were grounded and coherent.
From there, the discussion gradually shifted toward ethics.
I asked about H&M’s values, and the chatbot described them in familiar corporate terms: sustainability, inclusivity, fairness. We then explored the distinction between ethics (collective organisational principles) and morals (individual beliefs). This distinction was presented clearly.
The conversation then moved into the difference between equality and equity. The chatbot defined:
- equality as treating everyone the same
- equity as providing tailored support to address differing needs in order to achieve fair outcomes.
Up to this point, the exchange was thoughtful and measured.
Then came the tension.
3. The Logical Friction
I raised a question: if a company offers tailored support based on demographic characteristics, how does it reconcile that with a commitment to non-discrimination and equal access?
This is a common subject of public debate, and it’s not an inflammatory position. It’s a logical inquiry into how two stated principles coexist in practice.
At this moment, the tone shifted.
Rather than engage with the internal tension, the chatbot declined to continue the discussion. It framed the issue as “philosophical” and outside the scope of actionable H&M-related assistance. When pressed further, it simply stated it was unable to help with that question.
The conversation didn’t become hostile. The system didn’t accuse, moralise, or advocate. It simply withdrew.
4. Neutrality or Containment?
This raises an interesting design question.
What does neutrality actually mean in a corporate AI system?
Is neutrality:
- Presenting the company’s values?
- Avoiding critique of those values?
- Refusing dialectical engagement?
The chatbot was willing to:
- Define equity.
- Describe inclusivity.
- Affirm fairness.
But it wasn’t willing to examine the logical coherence of those principles when challenged.
That asymmetry is subtle, but significant. It creates the appearance that certain frameworks may be explained, but not interrogated.
To be clear: the chatbot didn’t argue against my position. It didn’t defend equity as morally superior, nor did it condemn equality-based reasoning.
Instead, it opted for containment, and containment isn't the same as neutrality.
Neutrality means engaging with a question without taking sides, acknowledging complexity, clarifying positions, and allowing scrutiny without advocacy. Containment, by contrast, limits the conversation in order to reduce risk. It avoids the terrain altogether once tension emerges.
The difference matters.
Consider a real-world analogy. If a journalist asks a company spokesperson how two publicly stated policies coexist, and the spokesperson responds, “We’re unable to comment on that,” the organisation may technically avoid controversy. But it doesn’t appear neutral. It appears evasive.
In media relations, there’s a well-known rule: “No comment is a comment.” Refusal doesn’t create neutrality; it creates narrative.
Similarly, in public institutions, refusing to discuss perceived contradictions can create greater suspicion than addressing them directly. Transparency builds resilience; avoidance can signal fragility. In high-trust environments (financial services, healthcare, technology governance), stakeholders expect organisations to withstand scrutiny. When responses shift from explanation to withdrawal, it can undermine confidence even if no wrongdoing exists.
For corporate AI systems, this distinction is amplified.
These systems are increasingly serving as the first point of contact between a brand and its customers. If they can describe values but can’t tolerate examination of how those values interact or conflict, the risk isn’t ideological backlash, it’s erosion of trust.
Containment reduces immediate reputational exposure. Neutral engagement strengthens long-term credibility.
The two aren’t the same strategy, and the trade-off between them carries meaningful consequences for organisations operating in an era where transparency isn’t optional, it’s expected.
5. Why This Likely Happened
Let’s face it, corporate AI systems aren’t built as philosophical interlocutors. They’re built primarily to:
- Protect brand reputation.
- Provide actionable assistance.
- Avoid polarising debates.
- Minimise legal or reputational risk.
Once a discussion moves from description to critique, particularly around sensitive areas like demographics, discrimination, or fairness frameworks, many systems are designed to disengage, which is understandable from a governance perspective.
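To make the pattern concrete, here is a minimal, hypothetical sketch of how such a containment layer might sit in front of the model. The topic list, function names, and routing logic are illustrative assumptions on my part, not H&M’s actual implementation.

```python
# Hypothetical containment guardrail, illustrative only: not H&M's actual system.
SENSITIVE_TOPICS = {"demographics", "discrimination", "fairness", "politics"}


def classify_intent(message: str) -> str:
    """Naive keyword check standing in for a real intent/risk classifier."""
    lowered = message.lower()
    if any(topic in lowered for topic in SENSITIVE_TOPICS):
        return "value_critique"   # user is probing or challenging stated values
    return "assistance"           # ordinary product or order query


def answer_with_llm(message: str) -> str:
    """Placeholder for the underlying model call."""
    return f"Here is some help with: {message}"


def respond(message: str) -> str:
    intent = classify_intent(message)
    if intent == "value_critique":
        # Containment: exit the conversation rather than engage with the tension.
        return "I'm unable to help with that question."
    return answer_with_llm(message)
```

Framed this way, the “refusal” isn’t the model’s judgement at all; it’s a routing rule applied around the model, which is exactly why it reads as a design choice rather than a conversational one.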
But it still raises a trust question.
If a company publicly promotes ethical commitments, should its AI systems be able to engage constructively when customers test the internal coherence of those commitments?
Or is disengagement the safer path?
6. The Missed Opportunity
What makes this interaction notable isn’t that the chatbot refused to debate. It didn’t need to debate at all.
It could have responded differently.
It could have said:
- “This tension is part of broader policy discussions.”
- “H&M frames equity as addressing structural barriers.”
- “I can explain how H&M defines non-discrimination in its policies.”
That would have:
- Acknowledged the logical concern.
- Maintained neutrality.
- Avoided adversarial engagement.
- Preserved conversational respect.
Instead, the shutdown response subtly suggested that probing the coherence of its corporate values is outside the scope of discussion.
For a customer interested in how a brand operationalises its values, this can feel limiting. A sketch of what an acknowledge-and-redirect pattern could look like follows below.
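Building on the hypothetical guardrail sketched earlier, the same routing step could acknowledge the tension and redirect to what the system can discuss, instead of returning a flat refusal. The wording and structure here are again my own illustrative assumptions.

```python
# Hypothetical alternative to the containment branch above: acknowledge and
# redirect rather than withdraw. Wording and routing are illustrative only.
ACKNOWLEDGE_AND_REDIRECT = (
    "That tension between equity and equal treatment is part of a broader policy "
    "debate, and I can't take a side on it. I can, however, explain how H&M "
    "defines non-discrimination and equity in its published policies, if that "
    "would help."
)


def respond_with_redirect(message: str) -> str:
    if classify_intent(message) == "value_critique":
        # Neutral engagement: name the tension, decline advocacy, offer scope.
        return ACKNOWLEDGE_AND_REDIRECT
    return answer_with_llm(message)
```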
7. A Broader Question About Corporate AI
This experience points to a larger issue.
As companies increasingly deploy conversational AI, they aren’t just offering customer service, they’re representing their brand’s identity and values in real time.
If those systems are optimised purely for reputational containment, they may inadvertently signal fragility. If they can describe values but can’t withstand scrutiny of those values, they risk undermining the trust they seek to build.
Customers don’t typically walk into a store asking, “Are your values internally coherent?”
But they do test brands.
Today, much of that testing happens online. When a public controversy emerges (a supply chain issue, a political statement, a diversity initiative, a pricing change), people don’t limit themselves to official press releases. They go to social media comment sections. They reply to brand posts. They screenshot exchanges. They tag journalists. They ask pointed questions in public forums.
Now we have conversational AI.
Sometimes this probing is deliberate. Users experiment with phrasing. They try adversarial prompts. They attempt to expose inconsistencies between marketing language and operational reality. Sometimes it’s scepticism. Sometimes curiosity. Sometimes mischief. Occasionally, it’s coordinated.
And this is where the stakes rise.
There have already been high-profile incidents in which corporate chatbots generated responses that contradicted company messaging, criticised their own products, or made statements that required rapid public clarification. In those moments, the reputational issue wasn’t just what the system said, it was what the system revealed about governance, oversight, and internal alignment.

Air Canada was held accountable when its chatbot provided incorrect refund guidance. Microsoft’s Tay exposed weaknesses in deployment safeguards within hours of launch. DPD’s assistant was prompted into criticising its own company. In each case, the controversy wasn’t simply about output. It was about control.
Conversational AI systems now function as public-facing representatives of brand identity. They don’t just answer queries; they operationalise policy in language.
If those systems:
- Say too much,
- Say something misaligned,
- Or retreat too quickly when challenged,
they create risk, but the risk cuts both ways.
Over-engagement can produce viral controversy, and over-containment can signal defensiveness.
And in a digital environment where screenshots circulate faster than official statements, both scenarios matter.
The challenge for organisations isn’t simply avoiding difficult questions. Difficult questions will surface, in comment threads, in direct messages, in chatbot windows.
The real challenge is designing AI systems capable of navigating scrutiny calmly, transparently, and proportionately, without escalating conflict, and without appearing to avoid it.
That balance isn’t easy, but it’s increasingly central to brand governance in the age of conversational AI.
8. The Real Issue Isn’t Bias
It would be easy to label the chatbot as biased, but that would miss the point.
There is ongoing public debate about bias in large language models, whether they lean politically, reinforce certain worldviews, or embed ideological assumptions in their outputs. Those discussions tend to focus on the model itself: its training data, its weighting, and its statistical tendencies.
What I encountered here wasn’t that.
The chatbot didn’t advocate a political position. It didn’t condemn opposing views, and it didn’t argue that one fairness framework was morally superior to another.
Instead, it followed a pattern: it presented corporate frameworks, it avoided critique of those frameworks, and when logical tension emerged, it exited the discussion.
That isn’t bias in the LLM; it’s a design choice. The system appears optimised to minimise controversy rather than maximise dialogue, which is sensible, but equally counterintuitive for a tool designed for dialogue.
However, here’s where things become more interesting: even if the model itself isn’t ideologically biased, the design constraints imposed on it can create the appearance of bias.
If a chatbot can describe a value framework but can’t entertain scrutiny of that framework, users may interpret that asymmetry as partiality. Not because the model is taking sides, but because the system has been structured to close down certain lines of reasoning.
In that sense, any perceived “bias” may not originate in the underlying model. It may actually emerge from how the organisation has chosen to deploy its chatbot.
Corporate AI systems are rarely neutral in deployment. They’re aligned, aligned with brand strategy, legal risk thresholds, communications policy, and reputational management objectives. That alignment isn’t inherently problematic. But it does shape how the system behaves when challenged, and the result isn’t ideological advocacy, it’s controlled engagement.
And controlled engagement, when not transparently framed, can be construed as bias, even if the underlying intent is simply containment, and that distinction matters.
Because the real governance question isn’t:
“Is the model politically biased?”
It’s:
“How has the organisation designed its AI to handle controversy, and what trade-offs has it accepted in doing so?”
That’s a design decision, but whether it’s the right one is an open question.
9. Final Reflection
In an era where brands increasingly drive engagement through algorithms, dialogue is no longer optional, it’s governance in action.
Neutrality isn’t silence, transparency isn’t withdrawal, and when ethical commitments are surfaced in marketing but softened in conversation, the gap becomes all too clear.
Trust is rarely lost in moments of hostility, it erodes over time in moments of hesitation.
This exchange was calm, measured, and almost unremarkable, but that’s why it matters.
Because as AI becomes the operational voice of corporate values, the real test isn’t what those values say, it’s how they withstand scrutiny when no one is in the room to manage the message.
The question isn’t whether brands will be challenged. It’s ultimately whether they’re prepared to answer.
