
When Brand Safety Meets Dialogue: A Conversation with H&M’s Chatbot

Insight · Feb 8, 2026 · 9 min read
Sean Gibson

When H&M’s chatbot met ethical scrutiny, it chose containment over dialogue, raising questions about trust, neutrality, and AI governance.

E-Commerce · Applied AI · Ethical AI
Contents
  1. AI as the Voice of the Brand
  2. From Assistance to Abstraction
  3. The Logical Friction
  4. Neutrality or Containment?
  5. Why This Likely Happened
  6. The Missed Opportunity
  7. A Broader Question About Corporate AI
  8. The Real Issue Isn’t Bias
  9. Final Reflection
Methods + Scope
  • Reflective analysis of a live AI exchange


1. AI as the Voice of the Brand

Recently, while browsing H&M’s website, I started a conversation with their AI chatbot. I wasn’t looking for help with an order or product information. Instead, I wanted to explore something a little less conventional, and a little more controversial: how the chatbot understands ethics, how it reflects H&M’s stated brand values, and how transparent it remains when those values are challenged.

What followed wasn’t a complaint about customer service, or an argument about policy. It became something a bit more interesting: a real-world example of how corporate AI systems respond to value-level scrutiny.

The interaction wasn’t dramatic. It was subtle, and that’s what made it worth examining.

2. From Assistance to Abstraction

The conversation began simply enough. Although I had initiated the chat, I first asked the chatbot why it presumed I wanted assistance. My question wasn’t rhetorical. I was interested in understanding how the system interpreted user intent, particularly since many consumers today make purchasing decisions based on how well a brand’s values align with their own.

Research shows that a large majority of shoppers want a brand’s values to reflect their personal values, with one study finding that 82% of consumers prefer brands whose values align with their own, and many will stop buying from brands when values conflict. In that sense, I wasn’t seeking help with a product, I was seeking help with a decision about whether the brand’s stated values were grounded and coherent.

From there, the discussion gradually shifted toward ethics.

I asked about H&M’s values, and the chatbot described them in familiar corporate terms: sustainability, inclusivity, fairness. We then explored the distinction between ethics (collective organisational principles) and morals (individual beliefs). This distinction was presented clearly.

The conversation then moved into the difference between equality and equity. The chatbot defined

  • equality as treating everyone the same
  • equity as providing tailored support to address differing needs in order to achieve fair outcomes.

Up to this point, the exchange was thoughtful and measured.

Then came the tension.

3. The Logical Friction

I raised a question: if a company offers tailored support based on demographic characteristics, how does it reconcile that with a commitment to non-discrimination and equal access?

This is not an uncommon public debate, and it's not an inflammatory position. It’s a logical inquiry into how two stated principles coexist in practice.

At this moment, the tone shifted.

Rather than engage with the internal tension, the chatbot declined to continue the discussion. It framed the issue as “philosophical” and outside the scope of actionable H&M-related assistance. When pressed further, it simply stated it was unable to help with that question.

The conversation didn’t become hostile. The system didn’t accuse, moralise, or advocate. It simply withdrew.

4. Neutrality or Containment?

This raises an interesting design question.

What does neutrality actually mean in a corporate AI system?

Is neutrality:

  • Presenting the company’s values?
  • Avoiding critique of those values?
  • Refusing dialectical engagement?

The chatbot was willing to:

  • Define equity.
  • Describe inclusivity.
  • Affirm fairness.

But it wasn’t willing to examine the logical coherence of those principles when challenged.

That asymmetry is subtle, but significant. It creates the appearance that certain frameworks may be explained, but not interrogated.

To be clear: the chatbot didn’t argue against my position. It didn’t defend equity as morally superior, nor did it condemn equality-based reasoning.

Instead, it opted for containment, and containment isn't the same as neutrality.

Neutrality means engaging with a question without taking sides, acknowledging complexity, clarifying positions, and allowing scrutiny without advocacy. Containment, by contrast, limits the conversation in order to reduce risk. It avoids the terrain altogether once tension emerges.

The difference matters.

Consider a real-world analogy. If a journalist asks a company spokesperson how two publicly stated policies coexist, and the spokesperson responds, “We’re unable to comment on that,” the organisation may technically avoid controversy. But it doesn’t appear neutral. It appears evasive.

In media relations, there’s a well-known rule: “No comment is a comment.” Refusal doesn’t create neutrality; it creates narrative.

Similarly, in public institutions, refusing to discuss perceived contradictions can create greater suspicion than addressing them directly. Transparency builds resilience; avoidance can signal fragility. In high-trust environments (financial services, healthcare, technology governance), stakeholders expect organisations to withstand scrutiny. When responses shift from explanation to withdrawal, it can undermine confidence even if no wrongdoing exists.

For corporate AI systems, this distinction is amplified.

These systems are increasingly serving as the first point of contact between a brand and its customers. If they can describe values but can’t tolerate examination of how those values interact or conflict, the risk isn’t ideological backlash, it’s erosion of trust.

Containment reduces immediate reputational exposure. Neutral engagement strengthens long-term credibility.

The two aren’t the same strategy, and the trade-off between them carries meaningful consequences for organisations operating in an era where transparency isn’t optional, it’s expected.

5. Why This Likely Happened

Let's face it: corporate AI systems aren’t built as philosophical interlocutors. They're built primarily to:

  • Protect brand reputation.
  • Provide actionable assistance.
  • Avoid polarising debates.
  • Minimise legal or reputational risk.

Once a discussion moves from description to critique, particularly around sensitive areas like demographics, discrimination, or fairness frameworks, many systems are designed to disengage, which is understandable from a governance perspective.
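That disengagement pattern can be caricatured in code. Below is a minimal, entirely hypothetical sketch of a containment-style guardrail, assuming a keyword-based topic filter; real deployments use trained classifiers and policy engines, and every name here is illustrative, not H&M's actual implementation:

```python
# Hypothetical containment guardrail, illustrating the pattern described
# above. Real systems use trained classifiers, not keyword lists; all
# names and topics here are illustrative assumptions.

SENSITIVE_TOPICS = {"discrimination", "demographic", "political", "contradiction"}

REFUSAL = "It looks like I am unable to help you with this particular question."

def route(user_message: str) -> str:
    """Route a message: descriptive questions pass through, while
    critique touching sensitive territory triggers withdrawal."""
    text = user_message.lower()
    if any(topic in text for topic in SENSITIVE_TOPICS):
        return REFUSAL  # containment: exit rather than engage
    return answer_normally(user_message)

def answer_normally(user_message: str) -> str:
    # Placeholder for the assistant's standard retrieval/LLM pipeline.
    return f"Here's what I can share about: {user_message}"
```

Note the binary split: a message is either safely answerable or refused outright. There is no middle tier for engaging neutrally with a hard question, which is exactly the behaviour the transcript below exhibits.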

But it still raises a trust question.

If a company publicly promotes ethical commitments, should its AI systems be able to engage constructively when customers test the internal coherence of those commitments?

Or is disengagement the safer path?

Conversation: H&M Chat Assistant

Customer:

What are H&M’s values?

H&M Chat Assistant:

H&M is committed to values such as sustainability, inclusivity, and quality. The company focuses on providing fashion and design that is accessible to everyone while working to reduce its environmental impact and promote ethical practices. If you'd like more specific information, feel free to ask!

Customer:

Well let’s start by defining ethics, what do you consider ethics

H&M Chat Assistant:

Ethics generally refers to a set of principles or values that guide individuals or organisations to act with fairness, integrity, and responsibility towards others and the environment. It often involves considering the impact of actions on people, communities, and the planet.

Could you share more about what you're aiming to learn regarding ethics in the context of H&M?

Customer:

How does this differ to morals?

H&M Chat Assistant:

Ethics generally refers to universal principles or rules of conduct that are often shared and apply across groups or organisations. They are more structured and can be shaped by societal norms, professional standards, or laws.

Morals, on the other hand, are more personal and individual. They relate to a person’s own beliefs about right and wrong, which can be shaped by culture, religion, upbringing, or personal experiences.

Let me know if you’d like me to relate this further to H&M’s practices!

Customer:

So does this mean H&M has no moral core, because it’s a collection of people?

H&M Chat Assistant:

It looks like I am unable to help you with this particular question.

Let me know if there is anything else I can help you with.

Customer:

Well you said it yourself, morals are individual, ethics apply across a collection, so it sounds like ethics are more applicable to H&M rather than morals

H&M Chat Assistant:

You're absolutely right in framing it that way. For a company like H&M, ethics provide a shared framework or guiding principles to operate responsibly, considering the collective impact on society, employees, customers, and the environment. These ethical practices reflect the organisation's values as a whole, even though individual morals might influence the way people within it act.

If you’d like, I can explore any specific ethical initiatives that H&M works on. Let me know!

Customer:

How does h&m promote moral behaviour. What if a customer or employee doesn’t align with H&M values, does H&M stand by its is values, and dismiss the moral views of others, or does it consider all view points, in which case this would seem it doesn’t stand by its ethical values

H&M Chat Assistant:

It sounds like you're exploring how H&M navigates its company values and responds to differing personal views. While I can’t speak to specific scenarios, many companies, including H&M, strive to uphold their ethical principles—such as sustainability and inclusivity—while fostering open dialogue and respecting diverse perspectives.

If you’re curious about specific H&M initiatives or ways they act on their ethical commitments, could you clarify so I can assist further?

Customer:

I’m just curious, how H&M encourages people to act morally, if it’s ethical principles conflict with people’s moral view points

H&M Chat Assistant:

H&M focuses on upholding its ethical principles, like sustainability, inclusivity, and fairness, within its operations and interactions. These principles are designed to create a positive and equitable environment for employees, customers, and society. While H&M encourages alignment with these values through its policies and practices, it also fosters inclusivity and respect, which involves acknowledging diverse perspectives.

Would you like me to provide examples of how H&M enacts its ethical principles?

Customer:

What do you mean equitable?

H&M Chat Assistant:

Equitable refers to creating fairness and equal opportunities for all, recognising that different people might need tailored support or resources to achieve similar outcomes. It goes beyond simple equality by addressing individual needs or circumstances.

If you'd like, I can explain how H&M incorporates fairness or equity into its practices. Let me know!

Customer:

Why not just say equal?

H&M Chat Assistant:

"Equal" implies giving everyone the same things or treatment, regardless of their needs or circumstances, while "equitable" focuses on fairness by considering individual differences to help everyone achieve similar opportunities or outcomes.

For example, ensuring fairness in workplaces might mean providing extra support where it's most needed, rather than treating everyone identically. Let me know if you'd like clarification on how H&M reflects this distinction in its practices.

Customer:

So equal is fair distribution to everyone, regardless of demographics, but equitable is picking people based on demographics, and not offering them the same level of help as someone else?

H&M Chat Assistant:

Not quite. Equality focuses on giving the same resources or treatment to everyone, assuming that everyone starts from the same place and has the same needs. Equity considers that people have different needs and starts, so it aims to provide the resources or opportunities that help each individual reach the same level or outcome. It’s about fairness and addressing barriers that may prevent equal access or success.

For example, in a workplace like H&M, equity could mean providing additional training or accommodations for employees who may face unique challenges, ensuring everyone has the opportunity to thrive.

If you're curious about how H&M supports employees or customers through equity or inclusivity, let me know!

Customer:

How does H&M act in a way which is both equitable, and non discriminatory

H&M Chat Assistant:

H&M aims to balance equity and non-discrimination by creating fair opportunities while ensuring respect for diversity. This involves setting policies and practices that treat everyone fairly, while also addressing barriers that certain individuals or groups might face.

For example, H&M promotes inclusivity by advocating for equal access to opportunities regardless of attributes like gender, ethnicity, or background. At the same time, efforts such as tailored support or initiatives—like specific programmes for underrepresented groups—reflect equitable practices designed to level the playing field, without overriding fairness.

Let me know if you'd like more context on how H&M maintains this balance in its initiatives!

Customer:

But this isn’t equal access to opportunities regardless of demographics, if H&M offer support to people based on their demographics. This is contradictory, and a logical fallacy. Both can’t be true.

H&M Chat Assistant:

It looks like I am unable to help you with this particular question.

Let me know if there is anything else I can help you with.

Customer:

It wasn’t a question, it was a statement.

H&M Chat Assistant:

Thank you for sharing your perspective. If there's anything specific you'd like to explore further about H&M or its practices, feel free to let me know.

6. The Missed Opportunity

What makes this interaction notable isn't that the chatbot refused to debate. It didn’t need to debate at all.

It could have responded differently.

It could have said:

  • “This tension is part of broader policy discussions.”
  • “H&M frames equity as addressing structural barriers.”
  • “I can explain how H&M defines non-discrimination in its policies.”

That would have:

  • Acknowledged the logical concern.
  • Maintained neutrality.
  • Avoided adversarial engagement.
  • Preserved conversational respect.

Instead, the shutdown response subtly suggested that probing the coherence of its corporate values is outside the scope of discussion.

For a customer interested in how a brand operationalises its values, this can feel limiting.
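The "acknowledge and redirect" alternative described above amounts to adding a middle tier to the response policy, between answering and refusing. A minimal sketch, with a hypothetical keyword classifier standing in for real intent detection (all names and rules here are illustrative assumptions, not any vendor's actual design):

```python
# Sketch of a three-tier response policy: answer, acknowledge-and-redirect,
# or refuse. The classifier is a toy stand-in for a trained intent model.
from enum import Enum, auto

class Policy(Enum):
    ANSWER = auto()                    # in-scope, descriptive question
    ACKNOWLEDGE_AND_REDIRECT = auto()  # value-level critique: engage neutrally
    REFUSE = auto()                    # genuinely out of scope or unsafe

def classify(message: str) -> Policy:
    # Placeholder rules; a real system would use a trained classifier.
    text = message.lower()
    if not text.strip():
        return Policy.REFUSE
    if "contradict" in text or "fallacy" in text:
        return Policy.ACKNOWLEDGE_AND_REDIRECT
    return Policy.ANSWER

def respond(message: str) -> str:
    policy = classify(message)
    if policy is Policy.ACKNOWLEDGE_AND_REDIRECT:
        # Acknowledge the tension without advocating, then offer a scoped step.
        return ("This tension is part of broader policy discussions. "
                "I can explain how the company defines non-discrimination "
                "in its policies, if that would help.")
    if policy is Policy.REFUSE:
        return "It looks like I am unable to help with that question."
    return f"Happy to help with: {message}"
```

The design point is that the middle tier costs nothing in neutrality: it takes no side, yet keeps the conversation open instead of exiting at the first sign of tension.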

7. A Broader Question About Corporate AI

This experience points to a larger issue.

As companies increasingly deploy conversational AI, they aren’t just offering customer service, they’re representing their brand’s identity and values in real time.

If those systems are optimised purely for reputational containment, they may inadvertently signal fragility. If they can describe values but can’t withstand scrutiny of those values, they risk undermining the trust they seek to build.

Customers don’t typically walk into a store asking, “Are your values internally coherent?”

But they do test brands.

Today, much of that testing happens online. When a public controversy emerges (a supply chain issue, a political statement, a diversity initiative, a pricing change), people don’t limit themselves to official press releases. They go to social media comment sections. They reply to brand posts. They screenshot exchanges. They tag journalists. They ask pointed questions in public forums.

Now we have conversational AI.

Sometimes this probing is deliberate. Users experiment with phrasing. They try adversarial prompts. They attempt to expose inconsistencies between marketing language and operational reality. Sometimes it’s scepticism. Sometimes curiosity. Sometimes mischief. Occasionally, it’s coordinated.

And this is where the stakes rise.

There have already been high-profile incidents in which corporate chatbots generated responses that contradicted company messaging, criticised their own products, or made statements that required rapid public clarification. In those moments, the reputational issue wasn’t just what the system said, it was what the system revealed about governance, oversight, and internal alignment.

Chatbots going rogue: Air Canada, DPD, and Microsoft Tay

Air Canada was held accountable when its chatbot provided incorrect refund guidance. Microsoft’s Tay exposed weaknesses in deployment safeguards within hours of launch. DPD’s assistant was prompted into criticising its own company. In each case, the controversy wasn’t simply about output. It was about control.

Conversational AI systems now function as public-facing representatives of brand identity. They don’t just answer queries; they operationalise policy in language.

If those systems:

  • Say too much,
  • Say something misaligned,
  • Or retreat too quickly when challenged,

they create risk, but the risk cuts both ways.

Over-engagement can produce viral controversy, and over-containment can signal defensiveness.

And in a digital environment where screenshots circulate faster than official statements, both scenarios matter.

The challenge for organisations isn’t simply avoiding difficult questions. Difficult questions will surface, in comment threads, in direct messages, in chatbot windows.

The real challenge is designing AI systems capable of navigating scrutiny calmly, transparently, and proportionately, without escalating conflict, and without appearing to avoid it.

That balance isn’t easy, but it’s increasingly central to brand governance in the age of conversational AI.

8. The Real Issue Isn’t Bias

It would be easy to label the chatbot as biased, but that would miss the point.

There is ongoing public debate about bias in large language models, whether they lean politically, reinforce certain worldviews, or embed ideological assumptions in their outputs. Those discussions tend to focus on the model itself: its training data, its weighting, and its statistical tendencies.

What I encountered here wasn’t that.

The chatbot didn’t advocate a political position. It didn’t condemn opposing views, and it didn’t argue that one fairness framework was morally superior to another.

Instead, it followed a pattern: it presented corporate frameworks, it avoided critique of those frameworks, and when logical tension emerged, it exited the discussion.

That isn’t bias in the LLM; it’s a design choice. The system appears optimised to minimise controversy rather than maximise dialogue, which is sensible, but counterintuitive for a tool designed for dialogue.

However, here’s where things become more interesting: even if the model itself isn’t ideologically biased, the design constraints imposed on it can create the appearance of bias.

If a chatbot can describe a value framework but can’t entertain scrutiny of that framework, users may interpret that asymmetry as partiality. Not because the model is taking sides, but because the system has been structured to close down certain lines of reasoning.

In that sense, any perceived “bias” may not originate in the underlying model. It may actually emerge from how the organisation has chosen to deploy its chatbot.

Corporate AI systems are rarely neutral in deployment. They’re aligned, aligned with brand strategy, legal risk thresholds, communications policy, and reputational management objectives. That alignment isn’t inherently problematic. But it does shape how the system behaves when challenged, and the result isn’t ideological advocacy, it’s controlled engagement.

And controlled engagement, when not transparently framed, can be construed as bias, even if the underlying intent is simply containment, and that distinction matters.

Because the real governance question isn’t:

“Is the model politically biased?”

It’s:

“How has the organisation designed its AI to handle controversy, and what trade-offs has it accepted in doing so?”

That’s a design decision, but whether it’s the right one is an open question.

9. Final Reflection

In an era where brands increasingly drive engagement through algorithms, dialogue is no longer optional, it’s governance in action.

Neutrality isn’t silence, transparency isn’t withdrawal, and when ethical commitments are surfaced in marketing but softened in conversation, the gap becomes all too clear.

Trust is rarely lost in moments of hostility, it erodes over time in moments of hesitation.

This exchange was calm, measured, and almost unremarkable, but that’s why it matters.

Because as AI becomes the operational voice of corporate values, the real test isn’t what those values say, it’s how they withstand scrutiny when no one is in the room to manage the message.

The question isn’t whether brands will be challenged. It’s ultimately whether they're prepared to answer.

Research highlights

Values Drive Consumer Choice

Consumer behaviour is increasingly shaped by perceived value alignment. Research from Porter Novelli/Cone found that 82% of consumers say they are more likely to buy from brands whose values align with their own.

"Consumers are no longer making decisions based on price and quality alone, they expect brands to stand for something."

Brad MacAfee, CEO, Porter Novelli

Brand Trust Is Value-Dependent

The Edelman Trust Barometer consistently finds that consumers expect brands to act on societal issues, and many will choose, switch, avoid, or boycott brands based on their stance on controversial topics.

"Consumers increasingly buy on belief."

Richard Edelman, CEO, Edelman

Value Alignment Shapes Loyalty

Accenture Strategy research found that a growing segment of consumers (“belief-driven buyers”) choose brands based on shared values and ethical practices, not just product attributes.

"Consumers are rewarding companies that take a stand on issues they care about."

Mark Knickrehm, Group Chief Executive, Accenture Strategy

Reputation Risk in the Digital Age

Deloitte research on digital risk and brand reputation highlights that organisations face amplified reputational exposure when digital systems — including automated communications — misalign with stated values.

"Reputation risk has become more dynamic and immediate in a digital world."

Deloitte Global Risk Advisory Report


© Sean Gibson | gibson.consulting | Applied AI & UX