As someone who has spent years working with global brands, I’ve developed what I jokingly call a professional reflex. Whenever I see a product or service – be it a marketing campaign, a brand name, a piece of UX – I instinctively ask: How would this land in France? Or in Brazil? Or Japan?

It’s a habit I’ve picked up over time, and by now it feels more like instinct than conscious analysis. Call it an obsession, or maybe a kind of trained empathy. Whatever it is, it’s always running in the background.

So when generative AI tools exploded into the mainstream, it was only natural that I began wondering how they might land differently around the world. Not just in terms of the languages they support or the accents they mimic, but the deeper assumptions behind how they speak, suggest, decide, and behave.

Would people in different countries feel comfortable with the way AI chats with them? Would it come across as friendly or intrusive? Empowering or alienating? Would it understand local humour? Would it know what not to say?

More importantly… would people trust it?

I find myself asking a different kind of question these days, knowing full well the answers are complex. Because this time, we’re not just localising products or messaging. We’re localising intelligence.

The AI Index Report Confirmed What I Suspected

I recently came across the 2025 AI Index Report, which offers a compelling snapshot of global public opinion on AI – and reveals deep regional divides in how its benefits are perceived.

In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI as offering more benefits than drawbacks. Contrast that with Canada (40%), the U.S. (39%), and the Netherlands (36%), where public sentiment remains far more sceptical. And even in places where optimism is growing, concerns around fairness, bias, and trust remain stubbornly persistent.

This disparity is telling. It suggests that people are not responding to AI in the same way everywhere – and that those responses are shaped not just by access or infrastructure, but by something more layered: cultural context, historical memory, societal norms, and collective trust.

AI, in other words, isn’t a neutral technology moving through the world. It’s a phenomenon being interpreted through different lenses – and carrying different meanings as it does so.

AI Doesn’t Have a Global Image. It Has Many.

What struck me most from the report was this: AI, when productised, doesn’t behave like a single brand with a uniform reputation. It’s not Coca-Cola. It’s not the iPhone.

Instead, it’s a fragmented phenomenon – perceived as liberating in one country, suspicious in another, potentially job-threatening here, and educational there.

This fragmented global reception has profound implications for how AI products are built, marketed, adopted, and governed.

Let me illustrate how this plays out across a few different domains:

For AI startups, the typical focus is on functionality and scale. But if AI’s meaning shifts from culture to culture, product-market fit must also include a “cultural trust fit.” Messaging around innovation, productivity, or augmentation needs to be contextualised: is AI seen as empowering? Intrusive? Inevitable? Alienating? Startups entering new markets must do more than translate interfaces – they must translate intent. For example, a generative AI writing tool marketed in the U.S. might lead with productivity and creativity; in France, it may need to speak more to artistic integrity and cultural authenticity.

For consumer brands touting their “AI-first” transformation, there’s a risk in assuming AI always signals progress. Recent backlash faced by Duolingo and Klarna shows what happens when users perceive AI as replacing, rather than enhancing, the human touch. In low-optimism markets like Canada, the U.S., and the Netherlands, AI branding can trigger anxiety rather than excitement. Brands must learn to balance global AI positioning with local storytelling. In some markets, “AI-enhanced” may resonate more than “AI-first.” Bringing in local influencers, educators, or creators can serve as cultural bridges that build trust.

Let’s Talk About Model Behaviour

If the perception of AI varies across cultures, then surely the behaviour of AI models should reflect that too. And yet many of today’s most powerful systems – ChatGPT, Claude, Gemini – are trained largely on English-language, Western-centric data. They perform impressively across languages, but that doesn’t mean they grasp cultural nuance or behavioural expectations.

Translation isn’t localisation. Speaking a language correctly isn’t the same as knowing when to be formal, when to show deference, or when to exercise restraint. Tone, timing, and formality are culturally constructed.

An AI assistant that’s optimised for American norms might interrupt too quickly in Japan – or sidestep topics that, in another context, are essential to address. What’s seen as helpful in one country might be interpreted as evasive or inappropriate in another.
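To make that concrete, here’s a minimal sketch of what locale-conditioned behaviour settings might look like for an assistant. Everything in it – the parameter names, the profile values, the prompt wording – is hypothetical, illustrating only that formality, directness, and restraint can be explicit design parameters rather than accidents of training data.

```python
from dataclasses import dataclass

@dataclass
class BehaviourProfile:
    """Hypothetical per-locale behaviour settings for a conversational assistant."""
    formality: str         # e.g. "casual", "polite", "honorific"
    directness: float      # 0.0 = highly indirect, 1.0 = very direct
    interruption_ok: bool  # may the assistant jump in with suggestions?
    taboo_topics: list[str]

# Illustrative defaults -- real values would be co-created with local experts.
PROFILES = {
    "en-US": BehaviourProfile("casual", 0.9, True, []),
    "ja-JP": BehaviourProfile("honorific", 0.4, False, []),
    "fr-FR": BehaviourProfile("polite", 0.7, True, []),
}

def system_prompt(locale: str) -> str:
    """Assemble a behaviour-steering system prompt from the locale profile."""
    p = PROFILES[locale]
    lines = [f"Use a {p.formality} register."]
    lines.append("Offer unsolicited suggestions." if p.interruption_ok
                 else "Wait to be asked before offering suggestions.")
    if p.directness < 0.5:
        lines.append("Prefer indirect, deferential phrasing.")
    if p.taboo_topics:
        lines.append("Avoid raising: " + ", ".join(p.taboo_topics) + ".")
    return " ".join(lines)

print(system_prompt("ja-JP"))
```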

We’ve long accepted that brands need regional voices. Why shouldn’t AI?

Model Builders Are Starting to Pay Attention

For a while, it seemed this challenge was being overlooked in the rush to scale. But lately, signs of change are emerging.

At VivaTech 2025, I listened to a keynote by Jensen Huang, CEO of NVIDIA. To my surprise – and relief – he raised many of the same questions that had been circling in my mind. His talk focused on digital sovereignty and global ecosystems, and he laid out a vision in which AI models aren’t one-size-fits-all, but rather regionally grounded, culturally intelligent, and ethically plural.

He introduced Nemotron, NVIDIA’s initiative to support the development of open, adaptable, region-specific large language models. It marked a significant shift – not just technically, but ideologically.

Rather than striving for a universal model that behaves the same everywhere, Nemotron is built to be adapted – linguistically, behaviourally, even ethically – to suit local needs. It can be fine-tuned on proprietary data, embedded with national regulatory frameworks, and enriched with historical knowledge often missing from Western-centric training sets.
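The keynote didn’t go into Nemotron’s internals, but the general mechanism – adapting an open base model to a region through parameter-efficient fine-tuning – can be sketched. The snippet below uses the Hugging Face transformers and peft libraries; “gpt2” is just a stand-in base model, the corpus described in the comments is a placeholder, and nothing here is NVIDIA-specific.

```python
# A generic sketch of regional adaptation via LoRA fine-tuning.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("gpt2")   # placeholder base model
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Train only small adapter matrices; the base weights stay frozen,
# so one base model can carry many regional adapters.
config = LoraConfig(
    r=8,
    lora_alpha=16,
    target_modules=["c_attn"],  # GPT-2's fused attention projection
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()

# From here, fine-tune `model` on a regional corpus (local-language text,
# national regulatory documents, locally sourced history) with a standard
# causal-LM training loop or transformers' Trainer.
```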

When paired with Perplexity, a partner focused on grounding AI responses in local sources, linguistic norms, and real-time cultural cues, we start to see a future that’s not just intelligent, but pluralistic by design.

From Brand Strategy to Model Design: A Common Lesson

In many ways, this mirrors what global marketers have long understood. The most successful global brands don’t impose one voice across every market. They build a strong, consistent identity, but they allow that identity to flex and adapt.

The same principle should apply to AI.

The goal isn’t to splinter AI into dozens of incompatible systems. It’s to build modular, configurable, and culturally fluent AI experiences that reflect the richness of human difference.

That means:

Rethinking alignment – not as a universal doctrine, but as something to be co-created with local communities.

Designing model behaviour – not just training data – with cultural values in mind.

Embedding localisation not just in content, but in the AI’s tone, persona, and ethical framing.

Toward a Culturally Attuned Intelligence

AI is fast becoming a companion technology – a tool we speak to, and that speaks back. As it takes on this role, the question isn’t just whether it understands us, but how well it reflects who we are, where we are, and what matters to us.

Cultural nuance isn’t an afterthought. It’s the foundation of trust, usability, and relevance.

If AI is to earn its place as a meaningful presence in everyday life, it must learn to do what any respectful guest in a new place does: listen first, speak with care, and adapt to the room.

It’s not enough to build intelligent systems.

We need to build attuned ones.

At the Google I/O conference last week, a discussion brought together filmmaker Darren Aronofsky, DeepMind CEO Demis Hassabis, and director Eliza McNitt to explore the future of storytelling in the age of AI. It wasn’t just the new apps that sparked buzz – though there were plenty of those. What really stood out to me was Ancestra, McNitt’s latest film and the first outcome of the Primordial Soup x DeepMind collaboration.

In a setting like this, the conversation could have easily fallen into familiar binaries: hype versus fear, replacement versus enhancement. But instead, it spotlighted something subtler – and far more important. Through Ancestra, McNitt showed that AI isn’t dragging creativity down. It’s lifting it into new dimensions. 

What interested me most wasn’t how AI can replicate what we already know how to do, but how it lets us do things we’ve never done before. 

And that’s where I found myself repeating a thought I keep returning to: there is no such thing as “best practice” in AI-assisted storytelling – only “best fit.” That mindset, I believe, matters far beyond filmmaking.

Ancestra. The Story Begins.

Ancestra is a short film inspired by the day McNitt was born – an emergency C-section that nearly claimed both her life and her mother’s. It’s a story that is both intimate and difficult to visualise. Without generative AI, the film would have struggled to render the metaphysical, embryonic, and cosmic dimensions of that moment. But with early access to Google DeepMind’s Veo 3 and Flow tools, McNitt was able to reimagine the invisible forces surrounding her birth – cosmic imagery, symbolic renderings of life formation, and scenes that go far beyond what any camera could capture.

“To be honest, I never thought about telling the story of the day I was born – but here we are.”

The film was created using a hybrid pipeline: live-action performances by SAG-AFTRA union actors, full film crew production, and AI-generated videos.

What makes Ancestra meaningful isn’t the unique collaboration or novelty of the technology. It’s the contextual precision with which the technology was used.

Making the Invisible Visible

One of the film’s most powerful creative decisions was the creation of a digital baby – Baby Eliza. Instead of using a real infant on set, which can raise both ethical and logistical challenges, McNitt trained Veo on photographs of herself as a newborn. These were taken by her late father, a renowned photographer, and used to generate scenes that felt emotionally authentic and personally resonant.

To deepen that emotional fidelity, McNitt used a style-transfer tool to infuse the output with her father’s distinct photographic style. The result was not just technically impressive – it felt like the scenes were shot by someone who loved her. In doing so, McNitt extended her family’s artistic legacy into a future-facing medium.

She also used AI to visualise sequences that would be nearly impossible to capture with traditional tools – like a baby’s heartbeat in utero or stylised representations of cellular life and cosmic metaphors. In Ancestra, generative video became not just a visual aid, but a poetic lens for memory, imagination, and emotion.

Start with the Story, Not the Tool

What’s clear from McNitt’s approach is that she didn’t begin with the question “What can Veo do?” She began with a story only she could tell. The technology followed the narrative – not the other way around. That distinction matters.

AI should never dictate creative direction. It should amplify the storyteller’s intent. And what works for one story may not work for another. The way AI fits into the creative process is entirely dependent on the context – the story, the team, the moment, the constraints. In Ancestra, AI was used sparingly, intentionally, and only when it added emotional or narrative value.

AI Didn’t Reduce the Need for a Team – It Transformed the Roles

McNitt described the creative process as “a lot of nightmares” at times – referring to the unpredictability and rough edges of working with early-stage generative models. But rather than resisting that chaos, she leaned into it. She treated it as an expressive medium, not a polished product. Her job as a filmmaker was to shape and interpret the outputs, not expect perfection from them.

“It’s been very interesting to create and see what comes out when you embrace that chaos.”

The production of Ancestra involved over 165 people, including 15 dedicated “generators” – artists who guided Veo’s outputs. This marks a shift in how we think about creative teams. Prompt engineers, AI visualists, and model trainers are becoming as integral to the filmmaking process as cinematographers and editors. McNitt didn’t reduce her team – she redefined it.

Rethinking the Creative Process in the Age of AI

What McNitt’s process reveals is that generative AI doesn’t come with a playbook. You can’t Google your way to meaning. You can’t outsource intuition. Creative judgment still comes from the human – what to keep, what to discard, what to shape, and what to feel.

And as AI tools move further into the worlds of writing, music, design, advertising, and architecture, the temptation will be to chase standardisation. To build templates. To copy what worked elsewhere. But the lesson from Ancestra is this: AI isn’t a shortcut to creativity. It’s a prompt for maturity.

There is no right way to use AI. Only a right-for-this way. The only real “best practice” is knowing your intention, your audience, your story – and using AI only when it serves those things.

Human storytelling is not a protocol. It’s a pulse. AI should follow that beat – not override it.

We’ve always wanted to do it.

To create work that’s cohesive across every channel — from print to film to social — without having to brute-force it into consistency later.

To bring local teams into the creative process at the beginning, when it still matters, rather than tagging them in just before the deadline and asking them to “transcreate.”

To finally bring localisation into the heart of production — not bolted on at the end, but baked in from the beginning, so every market’s version can take shape as the content takes shape.

To design with adaptation in mind — not as an afterthought, but as a core principle.

But the truth is, until recently, the tools didn’t exist. Or they existed, but not at scale. And so, we got good at compromise. We got clever at fixing things late. We built processes around silos, because silos were safe.

Then came GenAI. And suddenly, the thing we’ve always wanted — that orchestration of content across markets, mediums, and moments — doesn’t seem impossible anymore. 

It may not be perfect yet, but it shows signs of possibility.

The instinct, of course, is to use the tech to speed up what we already do. Swap a synthetic voice in for a voice actor. Use AI to generate subtitles. Get three versions of a script instead of one.

It’s tempting. It’s useful. But it misses the point.

Because the real value of GenAI isn’t that it makes the existing machine faster. It’s that it lets us build a new machine entirely.

The Shift: From Tasks to Thinking

We don’t just need a more efficient workflow. We need a new kind of workflow — one that reflects how people consume content now.

Fragmented. Fast. Fluent across channels.
Personal, not just localised.
Relevant, not just repurposed.

In this new model, production isn’t linear — it’s layered.

Planning becomes platform-aware. Scripts are seeded with multilingual intent. Slogans written for print evolve into voiceovers. A shot designed for the hero film becomes a still for a product page, a loop for TikTok, or a background for a display ad.

The assets don’t just work harder — they work together.

AI doesn’t replace creativity here — it scaffolds it. It gives global teams a starting point, not a finish line. It lets us think modularly, culturally, and strategically at the same time. If anything, it puts the human imagination more firmly at the centre — because now we’re not just solving problems. We’re designing systems.

Start Where It Matters

So, I’ve started mapping out a living workflow — not a fixed blueprint, but a prototype. A draft for what global content production could look like when AI becomes a true creative partner.

It starts with integrated planning, where format, market, and message are aligned from the outset. Not just what to say, but where, how, and for whom. Not just one campaign, but all its potential versions. Not just global, but global-ready.

This framework breaks down the production cycle into four evolving stages — Planning, Pre-Production, Production, and Post-Production — with outcomes and roles clearly defined for each. It’s illustrative, so it may not be perfect. But it’s adaptable.

From there, pre-production becomes the foundation of adaptability. We use AI to generate multilingual script variants early, build asset libraries that are inherently cross-format, and design storyboards with different channels in mind. Every part of the creative process becomes an input into a wider system — a flywheel, not a funnel.

In production, we think in modules. A performance that works for the hero spot also works for the bumper. A product demo becomes a still image with a CTA. Synthetic voice tracks run alongside human ones — not to replace them, but to offer options. And AI tools help us localise visually in real time.

Then in post, we scale. Smartly. AI engines recompile edits by platform. Dubbing, subtitling, and cultural nuance are handled in a hybrid fashion — machine speed, human oversight. We don’t localise at the end. We finish at the end. And we feed what we’ve learned back into the machine for next time.
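To make the framework tangible, here’s a toy sketch of those four stages expressed as data. The field names, outcomes, and roles are illustrative placeholders of my own, not a fixed schema — every project would populate this differently.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    name: str
    outcomes: list[str]   # what the stage must produce
    roles: list[str]      # who owns it (humans plus AI assist)
    markets: list[str] = field(default_factory=list)

# Illustrative only -- a living document, not a blueprint.
WORKFLOW = [
    Stage("Planning",
          ["platform map", "message matrix per market"],
          ["strategist", "local leads"]),
    Stage("Pre-Production",
          ["multilingual script variants", "cross-format asset library"],
          ["writer", "AI script generator", "localisation lead"]),
    Stage("Production",
          ["modular hero footage", "synthetic + human voice tracks"],
          ["director", "crew", "AI visualist"]),
    Stage("Post-Production",
          ["per-platform recompiles", "hybrid dubbing and subtitling"],
          ["editor", "AI recompile engine", "local reviewers"]),
]

for stage in WORKFLOW:
    print(f"{stage.name}: {', '.join(stage.outcomes)}")
```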

Because that’s the thing. The workflow itself isn’t static. It’s a work in progress — a living document. Because the tools are changing, the platforms are changing, and our ambitions should be changing too.

Build the Muscle, Not Just the Machine

If there’s one principle to hold onto, it’s agility.

No two projects will use the same tools in the same way. What works for a regional retail rollout won’t work for a global brand film. And that’s okay. The goal isn’t to lock in a perfect process. It’s to build a flexible one.

That means building cross-functional teams that speak the same language — creative, data, AI, strategy.
It means investing in brand-specific training data, so AI outputs aren’t generic but grounded.
It means testing new tools in low-risk environments — subtitling, B-roll, social variants — and then scaling what works.

And above all, it means thinking differently.
Not just faster.
Not just cheaper.
But better.

I’ve Never Felt So Excited About What Comes Next

I’ve said it to colleagues again and again: I’ve never felt so excited about the changes happening in global production.

We’re standing at the edge of a new kind of production — one that’s not just about making things, but about designing systems that make possibilities real.

If we get this right, GenAI won’t just help us do what we already do a little better.
It’ll help us finally do the things we’ve always dreamed of — the things we knew were right — but never had the tools to make happen.

And the best part?

We’ve only just begun.

The concept of AI “hallucinations,” where an AI system generates information that is not grounded in factual data but invented by the model itself, is widely viewed as a significant issue in Generative AI. These outputs can be caused by various factors, such as limitations or biases in the training data, errors in the algorithms, or adversarial attacks. AI hallucinations can have negative consequences, such as spreading misinformation, causing harm, or undermining trust in AI systems. These inaccuracies become particularly problematic in scenarios requiring precise and accurate factual information in the generated output.

However, the perspective on AI hallucinations is shifting, especially in the context of creativity.

In an interview with the New York Times, Sam Altman of OpenAI observed that there’s a thin line between imagination and hallucination. This observation points to an intriguing shift in how artificial intelligence’s tendency to fabricate or “hallucinate” information is perceived, especially in creative contexts – from viewing these inaccuracies as flaws to considering them as potential sources of creativity.

In a recent article titled “Hallucinating Toward Creativity” from Bloomberg Businessweek, Colin Dunn, a designer and founder of Visual Electric Co., embraces AI’s unpredictability in image generation, likening it to brainstorming where unexpected ideas can lead to creative breakthroughs. 

Some suggest that not all issues need fixing. “Sometimes hallucinations are actual features – it’s called creativity, and sometimes it’s a bug,” Microsoft CEO Satya Nadella said in a recent interview.

Anastasis Germanidis, CTO of Runway AI Inc., takes AI’s creative unpredictability a step further, balancing groundedness with room for fantastical outputs. Runway’s tools let the AI interpret and add to user prompts, leading to unique and surreal creations.

The Nuance of AI-spiration in Creativity

AI’s unpredictable nature as a source of creative inspiration is worth exploring. Instead of focusing only on the final generated output, we can focus on AI’s ability to generate novel, unexpected, and unorthodox ideas or concepts.

This idea of harnessing AI hallucinations for creativity suggests a shift from seeking to eradicate these inaccuracies entirely to understanding and controlling them in ways that foster innovation. The aim is to keep AI’s creations anchored – not entirely detached from the real world – while allowing enough leeway to explore and generate imaginative, creative content.

The key to utilising AI effectively in these creative processes lies in maintaining a balance. When AI assists with brainstorming, we should have the freedom to explore and create beyond strict factual confines, but the output still needs to be tethered to a level of realism or practicality relevant to the specific application. This approach could lead to ground-breaking advances in how we perceive and implement AI in creative industries, opening doors to a new era of AI-assisted innovation that blends the best of human creativity with the unique capabilities of artificial intelligence.
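In practice, one of the simplest levers for that balance is sampling temperature paired with a grounding instruction. Here’s a minimal sketch using the OpenAI Python client – the model name and prompt wording are illustrative, and the same idea applies to any generation API:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def generate(prompt: str, creative: bool) -> str:
    """Same prompt, two regimes: grounded low-temperature output for
    factual work, higher temperature plus a looser brief for ideation."""
    system = (
        "Brainstorm freely; unexpected and surreal ideas are welcome."
        if creative else
        "Answer factually; say 'I don't know' rather than guessing."
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",                   # illustrative model choice
        temperature=1.0 if creative else 0.2,  # leeway vs. groundedness
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    )
    return response.choices[0].message.content

print(generate("Concepts for a short film about memory", creative=True))
```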

Reminder: AI outputs should always be carefully evaluated for accuracy, relevance, and appropriateness, especially in professional or sensitive contexts. Additionally, integrating human oversight ensures that the final outputs align with the intended goals and ethical standards.