As someone who has spent years working with global brands, I’ve developed what I jokingly call a professional reflex. Whenever I see a product or service – be it a marketing campaign, a brand name, a piece of UX – I instinctively ask: How would this land in France? Or in Brazil? Or Japan?

It’s a habit I’ve picked up over time, and by now it feels more like instinct than conscious analysis. Call it an obsession, or maybe a kind of trained empathy. Whatever it is, it’s always running in the background.

So when generative AI tools exploded into the mainstream, it was only natural that I began wondering how they might land differently around the world. Not just in terms of the languages they support or the accents they mimic, but the deeper assumptions behind how they speak, suggest, decide, and behave.

Would people in different countries feel comfortable with the way AI chats with them? Would it come across as friendly or intrusive? Empowering or alienating? Would it understand local humour? Would it know what not to say?

More importantly…would people trust it?

I find myself asking a different kind of question these days, knowing full well the answers are complex. Because this time, we’re not just localising products or messaging. We’re localising intelligence.

The AI Index Report Confirmed What I Suspected

I recently came across the 2025 AI Index Report, which offers a compelling snapshot of global public opinion on AI – and reveals deep regional divides in how its benefits are perceived.

In countries like China (83%), Indonesia (80%), and Thailand (77%), strong majorities see AI as offering more benefits than drawbacks. Contrast that with Canada (40%), the U.S. (39%), and the Netherlands (36%), where public sentiment remains far more sceptical. And even in places where optimism is growing, concerns around fairness, bias, and trust remain stubbornly persistent.

This disparity is telling. It suggests that people are not responding to AI in the same way everywhere – and that those responses are shaped not just by access or infrastructure, but by something more layered: cultural context, historical memory, societal norms, and collective trust.

AI, in other words, isn’t a neutral technology moving through the world. It’s a phenomenon being interpreted through different lenses – and carrying different meanings as it does so.

AI Doesn’t Have a Global Image. It Has Many.

What struck me most from the report was this: AI, when productised, doesn’t behave like a single brand with a uniform reputation. It’s not Coca-Cola. It’s not the iPhone.

Instead, it’s a fragmented phenomenon – perceived as liberating in one country, suspicious in another, potentially job-threatening here, and educational there.

This fragmented global reception has profound implications for how AI products are built, marketed, adopted, and governed.

Let me illustrate a few thoughts on how this plays out across different domains:

For AI startups, the typical focus is on functionality and scale. But if AI’s meaning shifts from culture to culture, product-market fit must also include a “cultural trust fit.” Messaging around innovation, productivity, or augmentation needs to be contextualised: is AI seen as empowering? Intrusive? Inevitable? Alienating? Startups entering new markets must do more than translate interfaces – they must translate intent. For example, a generative AI writing tool marketed in the U.S. might lead with productivity and creativity; in France, it may need to speak more to artistic integrity and cultural authenticity.

For consumer brands touting their “AI-first” transformation, there’s a risk in assuming AI always signals progress. Recent backlash faced by Duolingo and Klarna shows what happens when users perceive AI as replacing, rather than enhancing, the human touch. In low-optimism markets like Canada, the U.S., and the Netherlands, AI branding can trigger anxiety rather than excitement. Brands must learn to balance global AI positioning with local storytelling. In some markets, “AI-enhanced” may resonate more than “AI-first.” Bringing in local influencers, educators, or creators can serve as cultural bridges that build trust.

Let’s Talk About Model Behaviour

If the perception of AI varies across cultures, then surely the behaviour of AI models should reflect that too. And yet many of today’s most powerful systems – ChatGPT, Claude, Gemini – are trained largely on English-language, Western-centric data. They perform impressively across languages, but that doesn’t mean they grasp cultural nuance or behavioural expectations.

Translation isn’t localisation. Speaking a language correctly isn’t the same as knowing when to be formal, when to show deference, or when to exercise restraint. Tone, timing, and formality are culturally constructed.

An AI assistant that’s optimised for American norms might interrupt too quickly in Japan—or sidestep topics that, in another context, are essential to address. What’s seen as helpful in one country might be interpreted as evasive or inappropriate in another.

We’ve long accepted that brands need regional voices. Why shouldn’t AI?

Model Builders Are Starting to Pay Attention

For a while, it seemed this challenge was being overlooked in the rush to scale. But lately, signs of change are emerging.

At VivaTech 2025, I listened to a keynote by Jensen Huang, CEO of NVIDIA. To my surprise – and relief – he raised many of the same questions that had been circling in my mind. His talk focused on digital sovereignty and global ecosystems, and he laid out a vision in which AI models aren’t one-size-fits-all, but rather regionally grounded, culturally intelligent, and ethically plural.

He introduced Nemotron, NVIDIA’s initiative to support the development of open, adaptable, region-specific large language models. It marked a significant shift – not just technically, but ideologically.

Rather than striving for a universal model that behaves the same everywhere, Nemotron is built to be adapted to local needs: linguistically, behaviourally, even ethically. It can be fine-tuned on proprietary data, embedded with national regulatory frameworks, and enriched with historical knowledge often missing from Western-centric training sets.

Paired with Perplexity, a partner focused on grounding AI responses in local sources, linguistic norms, and real-time cultural cues, this points to a future that’s not just intelligent, but pluralistic by design.

From Brand Strategy to Model Design: A Common Lesson

In many ways, this mirrors what global marketers have long understood. The most successful global brands don’t impose one voice across every market. They build a strong, consistent identity, but they allow that identity to flex and adapt.

The same principle should apply to AI.

The goal isn’t to splinter AI into dozens of incompatible systems. It’s to build modular, configurable, and culturally fluent AI experiences that reflect the richness of human difference.

That means:

Rethinking alignment – not as a universal doctrine, but as something to be co-created with local communities.

Designing model behaviour – not just training data – with cultural values in mind.

Embedding localisation not just in content, but in the AI’s tone, persona, and ethical framing.
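
To make the idea less abstract, here is a minimal sketch in Python of how locale-aware behaviour could be layered on top of a shared persona. The brand name, locales, and behaviour attributes are hypothetical placeholders, and in practice these rules would be co-created with local teams rather than hard-coded by a central one.

```python
# Illustrative sketch only: locale-specific behaviour profiles layered on top of
# a shared persona. All names and values below are hypothetical.

BASE_PERSONA = "You are a helpful assistant for the Acme brand."  # hypothetical brand

LOCALE_PROFILES = {
    "ja-JP": {
        "formality": "high",              # default to a polite register
        "directness": "low",              # soften refusals and corrections
        "humour": "avoid unless invited",
    },
    "fr-FR": {
        "formality": "medium",
        "directness": "medium",
        "humour": "dry; cultural references welcome",
    },
    "en-US": {
        "formality": "low",
        "directness": "high",
        "humour": "light and conversational",
    },
}

def build_system_prompt(locale: str) -> str:
    """Compose a system prompt from the shared persona plus local behaviour rules."""
    profile = LOCALE_PROFILES.get(locale, {})
    rules = "; ".join(f"{key}: {value}" for key, value in profile.items())
    return f"{BASE_PERSONA} Adapt your behaviour for {locale}. Behaviour rules: {rules}."

print(build_system_prompt("ja-JP"))
```

The point is not the code itself but where the values come from: the profiles are exactly the kind of thing local communities, not a central team, should define.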

Toward a Culturally Attuned Intelligence

AI is fast becoming a companion technology – a tool we speak to, and that speaks back. As it takes on this role, the question isn’t just whether it understands us, but how well it reflects who we are, where we are, and what matters to us.

Cultural nuance isn’t an afterthought. It’s the foundation of trust, usability, and relevance.

If AI is to earn its place as a meaningful presence in everyday life, it must learn to do what any respectful guest in a new place does: listen first, speak with care, and adapt to the room.

It’s not enough to build intelligent systems.

We need to build attuned ones.

At the Google I/O conference last week, a discussion brought together filmmaker Darren Aronofsky, DeepMind CEO Demis Hassabis, and director Eliza McNitt to explore the future of storytelling in the age of AI. The new apps sparked plenty of buzz, but what really stood out to me was Ancestra, McNitt’s latest film and the first outcome of the Primordial Soup x DeepMind collaboration.

In a setting like this, the conversation could have easily fallen into familiar binaries: hype versus fear, replacement versus enhancement. But instead, it spotlighted something subtler – and far more important. Through Ancestra, McNitt showed that AI isn’t dragging creativity down. It’s lifting it into new dimensions. 

What interested me most wasn’t how AI can replicate what we already know how to do, but how it lets us do things we’ve never done before. 

And that’s where I found myself repeating a thought I keep returning to: there is no such thing as “best practice” in AI-assisted storytelling – only “best fit.” That mindset, I believe, matters far beyond filmmaking.

Ancestra. The Story Begins.

Ancestra is a short film inspired by the day McNitt was born – an emergency C-section that nearly claimed both her life and her mother’s. It’s a story that is both intimate and difficult to visualize. Without generative AI, the film would have struggled to render the metaphysical, embryonic, and cosmic dimensions of that moment. But with early access to Google DeepMind’s Veo 3 and Flow tools, McNitt was able to reimagine the invisible forces surrounding her birth – cosmic imagery, symbolic renderings of life formation, and scenes that go far beyond what any camera could capture.

“To be honest, I never thought about telling the story of the day I was born – but here we are.”

The film was created using a hybrid pipeline: live-action performances by SAG-AFTRA union actors, full film crew production, and AI-generated videos.

What makes Ancestra meaningful isn’t the unique collaboration or novelty of the technology. It’s the contextual precision with which the technology was used.

Making the Invisible Visible

One of the film’s most powerful creative decisions was the creation of a digital baby – Baby Eliza. Instead of using a real infant on set, which sometimes raises both ethical and logistical challenges, McNitt trained Veo on photographs of herself as a newborn. These were taken by her late father, a renowned photographer, and used to generate scenes that felt emotionally authentic and personally resonant.

To deepen that emotional fidelity, McNitt used a style-transfer tool to infuse the output with her father’s distinct photographic style. The result was not just technically impressive – it felt like the scenes were shot by someone who loved her. In doing so, McNitt extended her family’s artistic legacy into a future-facing medium.

She also used AI to visualize sequences that would be nearly impossible to capture with traditional tools – like a baby’s heartbeat in utero or stylized representations of cellular life and cosmic metaphors. In Ancestra, generative video became not just a visual aid, but a poetic lens for memory, imagination, and emotion.

Start with the Story, Not the Tool

What’s clear from McNitt’s approach is that she didn’t begin with the question “What can Veo do?” She began with a story only she could tell. The technology followed the narrative – not the other way around. That distinction matters.

AI should never dictate creative direction. It should amplify the storyteller’s intent. And what works for one story may not work for another. The way AI fits into the creative process is entirely dependent on the context – the story, the team, the moment, the constraints. In Ancestra, AI was used sparingly, intentionally, and only when it added emotional or narrative value.

AI Didn’t Reduce the Need for a Team – It Transformed the Roles

McNitt described the creative process as “a lot of nightmares” at times – referring to the unpredictability and rough edges of working with early-stage generative models. But rather than resisting that chaos, she leaned into it. She treated it as an expressive medium, not a polished product. Her job as a filmmaker was to shape and interpret the outputs, not expect perfection from them.

“It’s been very interesting to create and see what comes out when you embrace that chaos.”

The production of Ancestra involved over 165 people, including 15 dedicated “generators” – artists who guided Veo’s outputs. This marks a shift in how we think about creative teams. Prompt engineers, AI visualists, and model trainers are becoming as integral to the filmmaking process as cinematographers and editors. McNitt didn’t reduce her team – she redefined it.

Rethinking the Creative Process in the Age of AI

What McNitt’s process reveals is that generative AI doesn’t come with a playbook. You can’t Google your way to meaning. You can’t outsource intuition. Creative judgment still comes from the human – what to keep, what to discard, what to shape, and what to feel.

And as AI tools move further into the worlds of writing, music, design, advertising, and architecture, the temptation will be to chase standardization. To build templates. To copy what worked elsewhere. But the lesson from Ancestra is this: AI isn’t a shortcut to creativity. It’s a prompt for maturity.

There is no right way to use AI. Only a right-for-this way. The only real “best practice” is knowing your intention, your audience, your story – and using AI only when it serves those things.

Human storytelling is not a protocol. It’s a pulse. AI should follow that beat – not override it.

There’s no doubt the role of creatives has changed. And it continues to evolve rapidly with each wave of technological advancement. What we’re experiencing today goes beyond a shift in tools or techniques. It feels like a fundamental redefinition of what it means to shape communication, storytelling, and culture.

Historically, every major technological leap has reshaped not only what we create but how we create it. And, just as importantly, how creatives participate in that process. It’s no longer only about having new tools at our disposal, but about where creative judgement sits, how it’s applied, and how it’s evaluated.

Let me try to explain by going back for a moment.

Well, way back.

In the pre-digital era, we shot and edited commercials on film. It was a time-intensive, almost ritualistic process. We’d review rushes, mentally catalogue shots, scrutinise takes frame by frame, and wait hours, sometimes days, for a new cut. Creativity back then was slower, more linear, and physically bound to the constraints of the medium. We respected the craft deeply. There was a sense of reverence and distance between thinking and making.

Then digital changed everything. Suddenly, we could work faster, with more fluidity and collaboration. Editing became open-ended. We could experiment freely, explore multiple versions, and adapt executions for different markets and formats with greater ease. Digital tools didn’t just streamline production; they pulled creatives closer to the act of making. Iteration became part of the process. The feedback loop shortened. We gained agency.

Now, with the rise of GenAI and neuro-powered analytics, we’re entering a new phase of transformation. But this one feels different. This isn’t simply about working faster or producing more. It’s about intelligence. We now have the ability to anticipate – to know before we act.

AI can predict how a piece of creative might perform in terms of attention, emotion, and memorability, even before it goes live. Tools powered by neurological data and historical brand performance are reshaping how we plan, produce, and assess creative work. And that shift is significant.

A recent article about Dentsu’s Measurement Engine, which brings AI and neuroscience together to evaluate and optimise creative assets, stopped me in my tracks. What struck me wasn’t just the sophistication of the tech or the fact that agencies are already putting it to work. It was what this signals for our role as creatives. The conversation has shifted, from what we’re making to how our role is being redefined. Our value still lies in the craft, but increasingly, it’s in how we engage with systems of insight and translate data into creative action.

So, here’s my reflection on where we are, and where we might go next.

The Role of the Creative Is Expanding

#1. The creative brief is no longer a fixed starting point; it’s a living, evolving input.

What was once a static document shaped by strategy and intuition now draws from live data and predictive insight. Creatives no longer work in isolation from performance signals. We’re working within environments that forecast how audiences are likely to feel, notice, or remember an idea before it’s even produced.

The brief now behaves more like a hypothesis—something to test, adapt, and evolve. Brand history co-authors the ideation process. What worked last quarter becomes a reference for what might land next week. The brief is alive, and it changes alongside the ideas it sparks.

#2. Art directors and copywriters are blending storytelling with systems thinking.

The creative instinct is still there. But now it sits alongside real-time behavioural data, emotional resonance scores, and predictive modelling. Today’s creatives are expected to navigate dashboards, interpret heatmaps, and consider how cognitive load might shape audience recall.

Craft still plays a central role, but it’s increasingly accompanied by evidence. And rather than diminishing creativity, this might make it more accountable, more iterative—and potentially, more impactful. That remains to be seen. But we should stay curious.

#3. Producers are becoming architects of adaptive content ecosystems.

Production isn’t a finite process anymore. It’s modular, responsive, and continuous. Producers today manage pipelines that account for versioning, localisation, live signals, and performance-led adaptation.

The scope has expanded. Producers are becoming systems thinkers who orchestrate content networks that evolve as they move. They will be facilitators of scale and guardians of consistency, managing the delicate balance between central control and local relevance.

#4. The creative toolkit now includes neuro-insight dashboards and predictive platforms.

Tools like Dentsu’s Measurement Engine combine EEG, eye-tracking, cognitive scoring, and machine learning to provide creatives with predictive feedback at the concept stage. It sounds impressive, and it is, but it also presents new responsibilities. Creatives must now learn to evaluate layouts, visuals, and scripts not only for narrative clarity, but for emotional lift and projected recall.

We can now compare two headlines not just for voice or tone, but for predicted memorability. That doesn’t mean reducing creativity to numbers. It does mean expanding our confidence in decisions through foresight.
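
To illustrate the kind of side-by-side comparison described above, here is a deliberately toy sketch in Python. It is not Dentsu’s Measurement Engine or any real product; the scoring function is a crude stand-in for whatever trained model such a platform would expose, and the headlines are invented.

```python
# Toy illustration: ranking two headlines by a stand-in "predicted memorability" score.
# A real platform would use models trained on EEG, eye-tracking, and recall data.

def predicted_memorability(headline: str) -> float:
    """Crude proxy: shorter, punchier headlines score higher here."""
    words = headline.split()
    brevity = max(0.0, 1.0 - len(words) / 20)                          # reward brevity
    emphasis = sum(word[0].isupper() for word in words) / max(len(words), 1)
    return round(0.7 * brevity + 0.3 * emphasis, 3)

headlines = [
    "Taste the Difference Every Morning",
    "A comprehensively reformulated beverage experience for contemporary lifestyles",
]
for headline in sorted(headlines, key=predicted_memorability, reverse=True):
    print(predicted_memorability(headline), "-", headline)
```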

#5. Creative instincts aren’t being replaced; they’re evolving with earlier, sharper feedback.

There’s a persistent myth that AI flattens creativity. But used well, it can sharpen it. When creatives get timely feedback on emotional or behavioural signals, they can experiment with greater clarity, and iterate without the waste of blind rounds.

Intuition still matters. But in this new context, it becomes informed by foresight as well as hindsight. That’s a different kind of creative strength.

Collaboration Is Evolving, Too

#6. The creative team now includes data scientists, AI engineers, and behavioural analysts.

Our circle has expanded. We’re working with those who build the systems that shape our decisions and measure our outcomes. This means learning new collaborative behaviours, interpreting data narratives, translating technical input into brand meaning, and working with KPIs as shared goals, not external constraints.

The work doesn’t just have to be good. It has to be explainable, traceable, and tuned to context.

#7. Transcreation has become cultural intelligence at scale.

Transcreation today is no longer confused with translation. With the ability to measure emotional resonance by market, we’re designing frameworks that adapt by intent. Modular systems allow local teams to interpret the work meaningfully, without starting from scratch.

It’s not about creating uniformity. It’s about giving teams the raw materials to build culturally relevant expressions that still ladder back to a shared idea.

#8. Real-time iteration is a core creative capability.

Once a campaign goes live, it doesn’t conclude—it enters a new phase. Assets can now be adapted mid-flight. Messaging can be reshaped for new platforms or audience groups on the fly.

Designing with this elasticity in mind isn’t an add-on. It’s part of the brief. Creatives must think in systems, build flexibility into their work, and prepare assets that can shift with signals.

#9. Every creative output feeds into brand intelligence.

Each piece of content contributes to a larger feedback loop. Assets become more than moments, they become signals. What performs well can be reused, remixed, or scaled. What underperforms teaches us what to avoid.

Creativity now fuels a learning system. It’s not just storytelling, it’s a strategic asset that evolves with every piece we put into the world.

One caution, though: inserting creative judgment too early in the process could prevent the system from exploring beyond human convention. Knowing when to step in will be key, and it’s an area that needs further exploration.

#10. The creative mindset needs to prioritise outcomes over outputs.

We’ve long celebrated the “hero visual” or final execution. But today, one idea might need to exist in 50 or more versions, spanning platforms, moments, and audiences.

Creative excellence isn’t only about originality or craft. It’s about consistency, relevance, and responsiveness over time. Performance isn’t the enemy of creativity—it’s part of its purpose.

So, What Should We Do Differently?

This all sounds exciting. But it also demands change: cultural, operational, and creative. Testing and using new tools, yes. But more importantly, rethinking who’s in the room, when they’re invited, and how we work together.

Top of mind, here are four places we can start:

1. Redesign the brief as an intelligent, evolving object.

The creative brief should serve as an input into a broader feedback system. Frame hypotheses, identify outcome-based metrics, build versioning plans, and include signals that matter. Let the brief guide decision-making from concept through to performance analysis (a rough sketch of what this could look like follows this list).

2. Treat production as a system for flexible deployment.

Every asset should be built with adaptation in mind—across platform, market, and audience. Tagging and metadata should be standard. Producers and creatives need to understand versioning infrastructure and design for variation, not just delivery.

3. Bring in broader collaborators earlier.

GenAI encourages cross-disciplinary thinking. We should involve data strategists, behavioural experts, and AI leads at the start of creative development. Don’t bolt insight on after the fact—build with it from the beginning.

4. Reskill creatives for iterative deployment and performance fluency.

The idea of a big reveal is fading. Creatives need to write with range, design with flexibility, and think in adaptive structures. Performance feedback should be seen as fuel, not friction.
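
As flagged under points 1 and 2, here is a minimal, hypothetical sketch of what a brief-as-object and tagged, versionable assets might look like in code. The field names and values are illustrative, not a production schema.

```python
# Hypothetical sketch: a living brief and metadata-rich assets. Field names are examples.

from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class CreativeBrief:
    hypothesis: str                        # what we believe will land, stated testably
    outcome_metrics: List[str]             # e.g. attention, recall, sentiment lift
    markets: List[str]
    versioning_plan: Dict[str, List[str]]  # market -> planned variants
    learnings: List[str] = field(default_factory=list)  # updated as signals arrive

@dataclass
class Asset:
    asset_id: str
    platform: str                          # e.g. "tiktok", "ooh", "display"
    market: str
    language: str
    variant_of: Optional[str] = None       # link back to the master asset
    tags: List[str] = field(default_factory=list)

brief = CreativeBrief(
    hypothesis="Humour-led 6s cuts will outperform product-led 15s cuts with Gen Z",
    outcome_metrics=["attention", "brand recall"],
    markets=["JP", "BR", "FR"],
    versioning_plan={"JP": ["6s", "15s"], "BR": ["6s"], "FR": ["15s"]},
)
brief.learnings.append("JP 6s cut over-indexed on completion rate")  # the brief keeps evolving
```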

Phew. So, what’s next? You might ask…

If everyone is creative, then every creative today is, in some way, also a scientist. We’re becoming hybrids. Part imagination, part interpretation. Maybe even “brand model trainers” or walking “large creative models.” (There’s a headline in that somewhere.)

Creatives will become “Mixture of Creative Experts” (MoCE)

Jokes aside, this requires a shift – in how we think, how we make, and how we lead. It doesn’t happen overnight. But it does start with embracing the complexity.

Creativity still matters. Perhaps more than ever. But how we get there is changing. Instinct still plays a role. Now it works in dialogue with data, tools, and systems that help us learn faster, respond smarter, and create with greater purpose.

The machines might show us the map. But the meaning, the shape, the emotional depth, that’s still ours to craft.

And that, I believe, is where the real power lies.

We’ve always wanted to do it.

To create work that’s cohesive across every channel — from print to film to social — without having to brute-force it into consistency later.

To bring local teams into the creative process at the beginning, when it still matters, rather than tagging them in just before the deadline and asking them to “transcreate.”

To finally bring localisation into the heart of production — not bolted on at the end, but baked in from the beginning, so every market’s version can take shape as the content takes shape.

To design with adaptation in mind — not as an afterthought, but as a core principle.

But the truth is, until recently, the tools didn’t exist. Or they existed, but not at scale. And so, we got good at compromise. We got clever at fixing things late. We built processes around silos, because silos were safe.

Then came GenAI. And suddenly, the thing we’ve always wanted — that orchestration of content across markets, mediums, and moments — doesn’t seem impossible anymore. 

It may not be perfect yet, but it shows signs of possibility.

The instinct, of course, is to use the tech to speed up what we already do. Swap a synthetic voice in for a voice actor. Use AI to generate subtitles. Get three versions of a script instead of one.

It’s tempting. It’s useful. But it misses the point.

Because the real value of GenAI isn’t that it makes the existing machine faster. It’s that it lets us build a new machine entirely.

The Shift: From Tasks to Thinking

We don’t just need a more efficient workflow. We need a new kind of workflow — one that reflects how people consume content now.

Fragmented. Fast. Fluent across channels.
Personal, not just localized.
Relevant, not just repurposed.

In this new model, production isn’t linear — it’s layered.

Planning becomes platform-aware. Scripts are seeded with multilingual intent. Slogans written for print evolve into voiceovers. A shot designed for the hero film becomes a still for a product page, a loop for TikTok, or a background for a display ad.

The assets don’t just work harder — they work together.

AI doesn’t replace creativity here — it scaffolds it. It gives global teams a starting point, not a finish line. It lets us think modularly, culturally, and strategically at the same time. If anything, it puts the human imagination more firmly at the centre — because now we’re not just solving problems. We’re designing systems.

Start Where It Matters

So, I’ve started mapping out a living workflow — not a fixed blueprint, but a prototype. A draft for what global content production could look like when AI becomes a true creative partner.

It starts with integrated planning, where format, market, and message are aligned from the outset. Not just what to say, but where, how, and for whom. Not just one campaign, but all its potential versions. Not just global, but global-ready.

This framework breaks down the production cycle into four evolving stages — Planning, Pre-Production, Production, and Post-Production — with outcomes and roles clearly defined for each. It’s illustrative, so it may not be perfect, but it is adaptable.

From there, pre-production becomes the foundation of adaptability. We use AI to generate multilingual script variants early, build asset libraries that are inherently cross-format, and design storyboards with different channels in mind. Every part of the creative process becomes an input into a wider system — a flywheel, not a funnel.

In production, we think in modules. A performance that works for the hero spot also works for the bumper. A product demo becomes a still image with a CTA. Synthetic voice tracks run alongside human ones — not to replace them, but to offer options. And AI tools help us localize visually in real time.

Then in post, we scale. Smartly. AI engines recompile edits by platform. Dubbing, subtitling, and cultural nuance are handled in hybrid — machine speed, human oversight. We don’t localise at the end. We finish at the end. And we feed what we’ve learned back into the machine for next time.

Because that’s the thing. The workflow itself isn’t static. It’s a work in progress — a living document. Because the tools are changing, the platforms are changing, and our ambitions should be changing too.
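
Purely as an illustration, the four stages could even be captured as structured data that teams review and version like any other asset. The stage names come from the framework above; the outcomes and roles listed are examples, not a fixed blueprint.

```python
# Illustrative only: the production framework expressed as data so it can evolve with the tools.

WORKFLOW = {
    "Planning": {
        "outcomes": ["platform-aware plan", "market and message matrix"],
        "roles": ["strategist", "data analyst", "local market lead"],
    },
    "Pre-Production": {
        "outcomes": ["multilingual script variants", "cross-format asset library", "channel-aware storyboards"],
        "roles": ["copywriter", "AI generator", "localisation lead"],
    },
    "Production": {
        "outcomes": ["modular performances", "synthetic and human voice tracks"],
        "roles": ["director", "producer", "AI visualist"],
    },
    "Post-Production": {
        "outcomes": ["platform-specific recompiles", "hybrid dubbing and subtitling", "learnings fed back"],
        "roles": ["editor", "ML engineer", "cultural reviewer"],
    },
}

for stage, detail in WORKFLOW.items():
    print(f"{stage}: {', '.join(detail['outcomes'])}")
```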

Build the Muscle, Not Just the Machine

If there’s one principle to hold onto, it’s agility.

No two projects will use the same tools in the same way. What works for a regional retail rollout won’t work for a global brand film. And that’s okay. The goal isn’t to lock in a perfect process. It’s to build a flexible one.

That means building cross-functional teams that speak the same language — creative, data, AI, strategy.
It means investing in brand-specific training data, so AI outputs aren’t generic but grounded.
It means testing new tools in low-risk environments — subtitling, B-roll, social variants — and then scaling what works.

And above all, it means thinking differently.
Not just faster.
Not just cheaper.
But better.

I’ve Never Felt So Excited About What Comes Next

I’ve said it to colleagues again and again: I’ve never felt so excited about the changes happening in global production.

We’re standing at the edge of a new kind of production — one that’s not just about making things, but about designing systems that make possibilities real.

If we get this right, GenAI won’t just help us do what we already do a little better.
It’ll help us finally do the things we’ve always dreamed of — the things we knew were right — but never had the tools to make happen.

And the best part?

We’ve only just begun.

Language, once considered a domain of human interaction and expression, is now a critical operational layer that permeates branding, marketing, product development, knowledge management, customer service, and even internal collaboration. The advent of large language models (LLMs) has amplified this shift, enabling brands to leverage language as a dynamic tool for efficiency, innovation, and connection.

The start of 2025 brings in new goals and expectations across different aspects of language operations. With rapid advancements in technology, particularly in generative AI, we’re witnessing a fundamental shift in the way global content is created, adapted, and localised. Over the past year, I’ve been energized by these developments, not merely because of their efficiency or cost-saving potential but because they challenge us to rethink the creative and operational frameworks that underpin global branding and communication.

A Moment of Reflection: Lessons from Transcreation’s Rise

Reflecting on the early days of transcreation, I’m reminded of the transformative conversations that reshaped how global brands approached content adaptation. At that time, the idea of producing local versions of global campaigns in a centralized hub was both refreshing and disruptive. It spurred a cascade of innovations in team structures, asset management, and centralized production workflows, allowing global brands to appoint independent creative agencies without a network, while creative hotshops gained the ability to win and retain global clients from a single office. These discussions – focused on balancing quality with efficiency – laid the groundwork for the centralized and scalable systems many brands rely on today.

Now, with the rapid development of generative AI, I sense a similar moment of transformation – one that holds even greater potential to redefine the disciplines of language and content creation.

Here’s why:

Generative AI: A Catalyst for Change in Global Content Creation

Generative AI offers capabilities that challenge traditional silos in global branding and language operations. The technology is not just a tool for automation but a platform for reimagining collaboration, creativity, and cultural relevance. Key areas where I envision generative AI driving innovation include:

1. Decentralized and Collaborative Ideation

Generative AI allows for global creative platforms to be ideated, conceived, and refined in any market, language, or culture—and in real time. This is a profound shift from the historically English-centric approach to global campaigns.

Collaboration tools enhanced by AI also facilitate smoother communication across departments and geographies, breaking down silos, fostering innovation, and enabling creatives from diverse markets to articulate big ideas and anticipate challenges in adaptation. By empowering talent in any region to lead, we’re moving toward truly “global-ready” creative platforms where ideas can flow bidirectionally—whether originating from Tokyo, China, São Paulo, or Nairobi.

2. Blurring the Lines Between Translation, Transcreation, and Localisation

Generative AI’s ability to produce culturally nuanced and fluent language outputs is blurring the distinctions between these disciplines. What I’ve long referred to as “creative adaptations” is finally becoming a unified process. Foundation models, powered by brand-specific data, are already producing more coherent outputs across creative and technical content.

Key developments include:

  • Integrating brand terminology at the system level, ensuring consistency across all languages and content types.
  • Implementing supervisory agents within agentic workflows to maintain alignment with a brand’s tone, voice, and cultural context.
  • Grounding outputs in proprietary knowledge, creating more seamless integration across creative and technical writers and teams.

The result? Greater coherence and cultural sensitivity across all touchpoints.
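
As a rough sketch of that supervisory idea, a lightweight gate could run over generated copy before release, enforcing approved brand terminology and flagging unsupported claims. The terminology map, banned phrases, and draft copy below are all hypothetical.

```python
# Hypothetical sketch of a supervisory check in an agentic workflow; not a real brand's rules.

BRAND_TERMS = {              # approved term -> disallowed variants
    "SuperFizz Zero": ["Superfizz zero", "SF Zero"],
}
BANNED_PHRASES = ["best in the world", "guaranteed results"]  # example claim/tone rules

def supervisory_check(text: str) -> list:
    """Return a list of issues; an empty list means the copy passes this gate."""
    issues = []
    for approved, variants in BRAND_TERMS.items():
        for variant in variants:
            if variant in text:
                issues.append(f"Use '{approved}' instead of '{variant}'")
    for phrase in BANNED_PHRASES:
        if phrase.lower() in text.lower():
            issues.append(f"Remove unsupported claim: '{phrase}'")
    return issues

draft = "SF Zero delivers guaranteed results for every customer."
print(supervisory_check(draft))  # -> two issues flagged before the copy ships
```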

3. Foundation Models as Living Brand Guardians

Traditionally, brand guidelines have been static documents—invaluable but cumbersome. Generative AI enables the creation of dynamic, living brand style guides, grounded in “brand truth” and continuously refined with proprietary data. These AI-driven guidelines act as virtual partners, providing:

  • Real-time feedback on language and multimodal content creation.
  • Dynamic adaptability to changing market contexts or evolving brand narratives.
  • Enhanced consistency in tone, design, and cultural relevance across platforms.

This approach transforms static guidelines into an evolving resource that grows alongside the brand.

4. The Rise of Branded Conversational Interfaces

As generative AI evolves, brands are becoming increasingly conversational in their tone of voice. The next generation of customer-facing chatbots will be “branded customer agents”, regarded as brand ambassadors in their own right and serving as the primary touchpoint in customer journeys. Unlike traditional chatbots (a term that will no longer do justice to their role), these agents will:

  • Reflect the brand’s personality and tone of voice, shaping perceptions in real time.
  • Replace traditional corporate website hierarchies, allowing users to access information or services via natural language queries.
  • Create seamless, human-like interactions that enhance customer experience and deepen brand loyalty.

This shift will redefine the role of brand websites, transforming them from static repositories into dynamic, conversational platforms that adapt to each user’s needs.

Looking Ahead: Opportunities and Challenges

While the promise of generative AI is vast, it’s essential to approach these advancements with a balance of optimism and critical thought. AI is a double-edged sword in our industry – it empowers us to push boundaries, stretch production possibilities, and localize content at scale, yet it also raises critical challenges around intellectual property, ethical use, and fair remuneration.

Yet, as we’ve seen in past industry evolutions, the challenges are often the catalysts for innovation. The integration of generative AI into language operations is an opportunity to reimagine not just how we create and adapt content but how we connect with audiences across cultures, languages, and platforms.

As we step into 2025, I’m excited to see how these trends unfold and to be part of the conversations shaping the future of language in branding. 

Let’s chat. 

The adage “Genius is one percent inspiration and ninety-nine percent perspiration” remains as relevant today as ever. However, the nature of this “perspiration” has evolved. 

Today, a great creative idea also demands…

brilliantly crafted execution…

well-planned production…

nuanced localization…

and a touch of AI-enhanced generation.

This blend of human creativity and artificial intelligence opens a new chapter in the pursuit of exceptional content, where the breadth of AI’s capabilities harmoniously meets the depth of human insight.

Within the creative community, there are mixed feelings about artificial intelligence, specifically Generative AI.

Some believe AI can enhance the average creative’s ability to create emotionally resonant work by drawing from a vast database of cultural and formal references. They argue that AI-driven content often has wide appeal because it is based on extensive data, including popular trends and successful design elements. This can result in creations that resonate with a large audience, aligning with familiar and well-received concepts.

Quote from: https://www.archdaily.com/1012281/how-ai-will-make-everyone-a-better-designer-for-better-or-worse

Some think AI often provides “cliché” solutions, implying that the same extensive database enabling AI to create broadly appealing content can also lead to predictable and formulaic outputs. Since AI relies on patterns learned from existing data, it may often reproduce common or “safe” solutions, lacking the edginess or innovation that comes from human intuition and risk-taking.

Quote from: https://creative.salon/articles/features/cso-fight-ai-edition-bbh-will-gregor

These diverse viewpoints highlight two distinct perspectives, each with its own merits and concerns.

AI as a Catalyst for Enhanced Creativity

Expansive Reference Database. AI’s ability to draw from an extensive array of cultural and formal references can significantly augment a creative’s capacity to generate ideas that are culturally resonant and emotionally compelling, especially valuable in a global context.

Efficiency and Innovation in Design. As some suggest, AI can mimic human design patterns, producing work that consistently appeals to a broad audience. This aspect of AI can be seen as a tool for enhancing the creative process, allowing designers and creatives to explore new combinations and iterations swiftly.

Tool for Ideation and Exploration. AI’s role in the creative process can be likened to a “motorbike for the mind”. It streamlines certain aspects of creative work, such as the generation of ideas and the exploration of diverse creative paths, thus potentially expanding the range of concepts a creative can explore.

AI as a Proponent of Clichéd Solutions

Challenge of Originality. A core concern is AI’s current inability to innovate in the same way humans do. Its reliance on existing data may lead to outputs that are more derivative than ground-breaking, raising questions about the originality and uniqueness of AI-generated content.

Cliché and Commonality. The vast database AI draws upon can result in ‘cliché’ or overly common solutions. This is because AI tends to propose solutions that are statistically more likely, based on its training data, which may not always align with the need for fresh and unique creative expressions.

Uniformity in Creative Outputs. With widespread access to AI tools, there’s a risk of homogenization in creative outputs. As AI systems are often trained on similar datasets, the range of outputs may converge, leading to a lack of diversity in creative ideas. This necessitates a re-engagement with human ingenuity and a search for uncharted creative territories.

What’s “Lovable” Could Also Be “Universal”

So far, I am more inclined towards appreciating the capability of AI in generating “lovable” content. 

It’s true that with many creatives using similar AI tools, there’s a risk of a uniform style emerging in the industry. This could lead to a saturation of similar ideas, making it harder for brands and creatives to stand out. AI, as a tool, also lacks the capabilities to understand the subtleties and deeper cultural nuances that seasoned creatives instinctively integrate into their work. This can result in content that, while technically competent, lacks the depth and richness that come from human experience and insight.

But perhaps the “objectivity” of the output will allow for even better human input on the final work. Personally, since my work often deals with identifying the “universal truth” of a brand, I find this capability especially useful: given the global scope of the data AI can access, its outputs can incorporate diverse cultural elements, making the content more inclusive and resonant across different demographics. This is particularly valuable when creating a global platform for further local adaptation.

Balancing the Perspectives

In balancing these viewpoints, it’s crucial to recognize that AI, in its current state, is a tool that complements rather than replaces human creativity. 

The key lies in understanding AI as a tool that requires human guidance and input to create truly impactful work. While AI can efficiently generate content that is statistically likely to be popular, it requires the creative’s expertise to add the unique edge and depth that prevent the work from being merely cliché or populist. This involves:

Selective Integration: Using AI for initial ideation or routine tasks, while reserving the final creative decision-making for humans who can inject originality and cultural sensitivity.

Pushing Boundaries: Creatives should be encouraged to use AI outputs as a starting point, not the end goal. Pushing beyond the AI-generated ideas to explore more avant-garde or niche concepts can ensure that the work retains an edge.

Customized AI Training: Tailoring AI’s training data to include more unconventional, niche, or culturally specific content can help in generating more diverse and less clichéd outputs.

So, nothing is absolute, as they say. AI in creative work is best utilized as a collaborative tool that augments human creativity, rather than a standalone solution. It’s the synergy between AI’s efficiency and human creativity’s depth and edginess that will lead to truly resonant and innovative creative work.

The past 12 months have been the exploration phase of Generative AI, with creatives across various disciplines experimenting with its capabilities, pushing boundaries, and envisioning possibilities. From AI-generated art to personalized marketing content, we’ve explored novel applications. While the initial “wow” factor of AI-generated creations captured attention, this year demands tangible value. This means focusing on how generated content impacts audiences, drives outcomes, and solves real problems. This year, creatives must transition from exploration to execution, shifting the focus from “what if” to “what works.” And for some brands, that means doing so on a global scale.

Strategic Integration of GenAI Tools into Creative Processes

As we stand on the precipice of a new challenge – delivering tangible value through Generative AI tools – a strategic approach to integrating these tools into our creative processes is required. This demands not just innovative experimentation but also a clear articulation of the value they bring. Identifying areas within the creative process where Generative AI can enhance creativity, streamline workflows, and create production-grade, personalized content at scale is crucial.

An article from LBBonline titled “Is Generative AI Proving to be ‘Too’ Creative?” offers a nuanced perspective on integrating and utilizing Generative AI in the creative process. Each expert contributes a unique lens to the discussion, highlighting both potential benefits and challenges of leveraging AI in creative work. While some emphasize the necessity for critical thinking and contextual awareness, they also discuss the rapid advancement of AI technology, urging a differentiated approach to content creation based on the need for accuracy and quality. Others view AI’s imaginative output as a form of creativity, suggesting it could evolve alongside human creativity.

Across these viewpoints, common themes emerge: the need for critical evaluation, the balance between leveraging AI’s creative potential and recognizing its limitations, and the importance of human oversight and contextual understanding. However, I feel there are a few aspects that are missing.

From “Human-in-the-Loop” to “Cultural Expert-in-the-Loop”

Integrating Generative AI into the creative process requires more than just technical know-how; it demands a deep understanding of the content’s context, purpose, and audience. We often hear about the need for a “human-in-the-loop”. In the article mentioned above, Alex Hamilton from Dentsu Creatives advises, “Critical thinking, verification, and a healthy dose of scepticism are therefore essential.” He emphasizes considering the context in AI-generated content to ensure relevance and mitigate misleading outputs.

But I propose we go one step further: to ensure there is “cultural expertise” in the loop.

Involving cultural expertise in the process signifies a pivotal evolution in leveraging Generative AI for global creativity. In this advanced paradigm, human experts don’t just play a supervisory role but lead the initiative from inception, setting the standards for what constitutes high-quality output. This leadership encompasses everything from crafting nuanced prompts that guide AI in generating content, to defining and refining the brand’s tone of voice during the initial training and subsequent fine-tuning of Large Language Models (LLMs). The involvement of human expertise from the start ensures that the AI’s outputs are not only technically competent but also deeply infused with the brand’s identity and ethos.

Crucially, this expertise incorporates a profound understanding of cultural nuances, making it indispensable in today’s global marketplace. This approach mandates the inclusion of cultural consultants or experts who possess an intimate knowledge of the target audience’s cultural context. Their role is to ensure that AI-generated content is culturally congruent, sensitive, and capable of resonating positively with diverse audiences worldwide. These cultural experts provide insights into the societal norms, values, and taboos of different communities, helping to steer the content away from potential cultural faux pas and toward more inclusive, respectful, and engaging narratives.

As the AI undergoes iteration and improvement, the contribution of both subject matter and cultural experts becomes increasingly vital. They offer invaluable insights into refining the solution, effectively expanding the scope of feedback from purely technical or content-specific to encompassing broad cultural feedback. This richer, more diverse input is instrumental in further fine-tuning the algorithm, enhancing its ability to produce content that is not only of high quality but also culturally nuanced and relevant.

The iterative process of improvement facilitated by the involvement of cultural expertise ensures that the AI’s learning trajectory is aligned with evolving cultural trends and sensitivities. Regular quality assurance checks, informed by both expert critique and cultural insights, are integral to this process, helping to maintain and elevate the content’s quality, relevance, and cultural appropriateness over time. 

This model cultivates a dynamic and synergistic partnership between human experts and algorithms. It leverages the scalability and efficiency of AI while grounding its outputs in the rich, complex tapestry of human culture and expertise. Experts guide the AI, imbuing it with a nuanced understanding of cultural intricacies and brand-specific directives, thus enabling it to generate content that not only meets the technical criteria of quality but also embodies the values, tones, and sensitivities required to truly engage a global audience.

In essence, this approach represents a holistic and forward-thinking strategy for content creation on a global scale. It recognizes the indispensable role of human expertise in navigating the complexities of cultural diversity and brand identity, setting a new standard for AI-generated content that is as culturally informed as it is creatively inspired. Through this collaborative model, the potential of Generative AI is fully realized, offering content that is not just innovative and efficient but also deeply resonant and culturally attuned, continually improving to meet the highest standards of quality and relevance.

The Creatives X Machine Era

Generative AI represents a transformative force in the creative industry, offering tools that can augment human creativity in unprecedented ways. However, its effective integration into the creative process requires a nuanced approach that considers the importance of expertise, cultural sensitivity, and collaboration. By grounding the technology’s application in expert knowledge and a deep understanding of the audience, creatives can harness AI’s potential without compromising on content quality and relevance. This model emphasizes the collaborative nature of AI in creativity, where technology enhances human expertise, and together, they produce outputs that are not only innovative but also deeply resonant with the intended audience. In navigating the exciting possibilities of Generative AI, adopting a thoughtful, expert-guided approach is key to creating content that truly matters.

Note that what I have covered here focuses on the advertising and production use cases of Generative AI. In reality, the relationship between creatives across different disciplines and Generative AI is influenced by the distinct challenges and opportunities of each field. While the underlying technology might be similar, its application and impact vary widely, reflecting the unique creative processes, ethical considerations, and ultimate goals of each discipline.

So, no matter which creative disciplines you practice – be it advertising, architecture, fashion, art, music, gaming, or beyond – I would like to hear about the unique challenges, opportunities, and goals inherent to your specific field.

The concept of AI “hallucinations,” where an AI system generates information that is not based on factual data but rather on its own created narratives, is widely viewed as a significant issue in Generative AI. These outputs can be caused by various factors, such as limitations or biases in the training data, errors in the algorithms, or adversarial attacks. AI hallucinations can have negative consequences, such as spreading misinformation, causing harm, or undermining trust in AI systems. These inaccuracies become particularly problematic in scenarios requiring precise and accurate factual information in the generated output.

However, the perspective on AI hallucinations is shifting, especially in the context of creativity.

In an interview with the New York Times, Sam Altman of OpenAI observed that there’s a thin line between imagination and hallucination. This observation opens up an intriguing possibility: the evolving perception of artificial intelligence’s tendency to fabricate or “hallucinate” information, especially in creative contexts. It highlights a shift from viewing these inaccuracies as flaws to considering them as potential sources of creativity.

In a recent article titled “Hallucinating Toward Creativity” from Bloomberg Businessweek, Colin Dunn, a designer and founder of Visual Electric Co., embraces AI’s unpredictability in image generation, likening it to brainstorming where unexpected ideas can lead to creative breakthroughs. 

Some suggest that not all issues need fixing. “Sometimes hallucinations are actual features – it’s called creativity, and sometimes it’s a bug,” Microsoft CEO Satya Nadella said in a recent interview.

Anastasis Germanidis, CTO of Runway AI Inc., takes AI’s creative unpredictability a step further, balancing groundedness with room for fantastical outputs. Runway’s tools let the AI interpret and add to user prompts, leading to unique and surreal creations.

The Nuance of AI-spiration in Creativity

AI’s unpredictable nature as a source of creative inspiration is something worth exploring. Instead of focusing only on the final generated output, we can focus on AI’s ability to generate novel, unexpected, and unorthodox ideas or concepts.

This idea of harnessing AI hallucinations for creativity suggests a shift from seeking to entirely eradicate these inaccuracies to understanding and controlling them to foster innovation. The aim is to keep AI’s creations anchored enough that they are not entirely detached from the real world, while allowing enough leeway to explore and generate imaginative, creative content.

The key to effectively utilizing AI in these creative processes lies in maintaining a balance. While AI assists in the brainstorming process, we should have the freedom to explore and create beyond strict factual confines, yet the output still needs to be tethered to a level of realism or practicality relevant to the specific application. This approach could potentially lead to ground-breaking advancements in how we perceive and implement AI in creative industries, opening doors to a new era of AI-assisted innovation that blends the best of human creativity with the unique capabilities of artificial intelligence.
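
One concrete, widely available knob for that balance is sampling temperature: lower values keep outputs grounded and predictable, while higher values give the model more imaginative leeway. The sketch below assumes the OpenAI Python SDK with an API key in the environment; any chat API with a temperature parameter behaves similarly, and the prompt is just an example.

```python
# Minimal sketch: comparing a grounded setting with an exploratory one via temperature.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = "Describe a perfume bottle inspired by deep-sea bioluminescence."

for temperature in (0.2, 1.0):  # grounded vs. exploratory
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=temperature,
    )
    print(f"--- temperature={temperature} ---")
    print(response.choices[0].message.content)
```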

Reminder: AI outputs should always be carefully evaluated for accuracy, relevance, and appropriateness, especially in professional or sensitive contexts. Additionally, integrating human oversight ensures that the final outputs align with the intended goals and ethical standards.

Over the past year, Generative AI has blossomed into one of the most transformative technologies of our era. Its development has been nothing short of astonishing, sparking our collective imagination and prompting industries to reconsider their approaches. From the outset, it became evident that this technology wasn’t just a passing trend; it was a paradigm shift.

Generative AI, as a versatile general-purpose technology, has showcased its remarkable adaptability. Its applications span an impressive array of sectors, offering innovative solutions to age-old challenges. From healthcare to education, agriculture to retail, Generative AI has explored every nook and cranny, promising transformative possibilities in each domain.

Consumer adoption of Generative AI has been nothing short of spectacular. Within months of its introduction, we bore witness to the birth of the fastest-growing consumer application in history, reshaping how individuals engage with technology. Simultaneously, at an enterprise level, companies swiftly recognized the potential of Generative AI. Start-ups emerged, matured, and were promptly acquired, illustrating the intense competition in this burgeoning field.

My conversations with creatives and CEOs have painted a vivid picture of the divergent emotions surrounding Generative AI. Creatives have wholeheartedly embraced the new possibilities it offers, envisioning a future where AI collaborates in the creative process. In contrast, CEOs in global production agencies experience both excitement and caution, grappling with the transformative potential of Generative AI within their established structures.

Participating in an executive program on AI offered a unique vantage point to witness Generative AI’s impact across diverse industries. From healthcare to legal, education to agriculture, each sector approaches adoption and implementation with its own unique perspective and challenges. This diversity underscores the technology’s universality while emphasizing the need for tailored strategies.

AI Spring’s Key Moments (…So Far)

To reflect on pivotal moments in the Generative AI journey, I’ve compiled key events from the past 11 months or so. While these moments aren’t exhaustive and aren’t ranked in any particular order, they shed light on how various companies and industries, closely related to my own field, have harnessed the power of Generative AI to drive innovation, transform processes, and gain a competitive edge.

Here are the highlights (and some of the busiest months):

March – April: As the rapid development of Generative AI accelerated, companies across various sectors demonstrated a wide array of reactions. Tech giants like OpenAI and Meta continued to lead the charge with the introduction of GPT-4 and Llama 2, catering to both consumer and enterprise demands. Meanwhile, the Government of Iceland embraced the technology as a means to enhance its Icelandic language abilities, exemplifying how even governments are recognizing its potential. Coca-Cola launched a new commercial entitled “Masterpiece”; the VFX team at Electric Theatre Collective and creative agency Blitzworks used a mix of live-action shots, digital effects and AI to create the commercial and its complex transitions. In the corporate world, Morgan Stanley Wealth Management announced a strategic initiative to leverage Generative AI to synthesize content, underlining its increasing importance in the financial sector. Media outlets like the Daily Mirror and the Express ventured into AI-produced content, further blurring the lines between human and AI-generated journalism. Newer players like Anthropic and Tsinghua joined the AI race, unveiling chatbots and models aimed at fostering helpful, honest, and efficient interactions. In education, Khan Academy’s adoption of GPT-4 for Khanmigo emphasized AI’s role in revolutionizing learning. Expedia announced an exciting new use for artificial intelligence with the beta launch of an in-app travel planning experience powered by ChatGPT. Finally, Bloomberg’s launch of BloombergGPT highlighted how industries are building purpose-built models to cater to their unique needs. These diverse reactions demonstrate how Generative AI’s impact transcends boundaries, with companies big and small, across various sectors, recognizing its transformative potential.

May: The rapid development of Generative AI spurred creative and advertising agencies like WPP and VCCP into action. WPP, the global advertising conglomerate, partnered with NVIDIA to harness Generative AI’s capabilities for digital advertising. This strategic move showcased how the industry was eager to leverage AI to create innovative and highly personalized ad campaigns. Meanwhile, VCCP London took a bold step by launching “Faith,” an agency dedicated to using Generative AI for creative campaigns. This agency’s emergence signaled a pivotal shift in the creative landscape, demonstrating a readiness to explore the untapped potential of AI in generating compelling and unique content. These initiatives in May underscored how the advertising and creative sectors were proactively embracing Generative AI as a means to reimagine their creative processes and stay at the forefront of innovation in a rapidly evolving industry.

June: The acquisition of Pencil by the Brandtech Group marked a strategic response to the rapid development of Generative AI. Pencil, a generative AI SaaS platform built on OpenAI’s GPT models, offered a valuable tool for generating channel-ready ads and copy. The Brandtech Group’s acquisition showcased a clear recognition of the technology’s potential to revolutionize content creation and advertising, aligning with their objectives to enhance brand strategies through AI-driven creative solutions.

September: Accenture’s investment in Writer demonstrated a deep commitment to accelerating the enterprise adoption of Generative AI. Writer’s full-stack generative AI platform appears to have attracted attention from companies looking for a high-quality, secure environment. Accenture’s investment also signalled the importance of harnessing Generative AI for enterprise-level applications, from content generation to data synthesis, as a means to drive efficiency and innovation across industries.

The industry’s response to ChatGPT’s launch has been nothing short of remarkable. Each new development in the technology has been followed by a rapid succession of industry moves, demonstrating how eager companies are to keep pace with AI advancements.

As we delve into these insights, it becomes evident that Generative AI’s unstoppable progress is shaping the future of technology and business in profound ways.

Generative AI’s Next Act: Focus on Business Value

The swift adoption and incorporation of Generative AI models into various aspects of our lives demonstrate that this technology is not a distant dream; it’s a reality that’s here to stay. From content creation to virtual assistants, AI is drastically changing how we work, interact, and consume information. It has the potential to reshape industries and improve productivity across the board. In fact, Generative AI is now positioned on the “Peak of Inflated Expectations” on the Gartner “Hype Cycle for Emerging Technologies, 2023”, which is marked by rapid growth, widespread adoption, and a focus on speed as a strategic advantage.

The technology has already begun transforming industries and promises to continue doing so, fundamentally altering the way we live and work. This era of AI is unstoppable, and those who embrace it with agility and innovation will likely reap the greatest rewards.

Generative AI’s initial year was characterized by the discovery and deployment of foundation models, sparking a frenzy of novel applications that showcased the technology’s capabilities. These early apps were primarily technology-driven, offering lightweight demonstrations of Generative AI’s potential. However, as this “AI Spring” begins to mature, the focus is shifting from technology-out to customer-back. In this phase, Generative AI is poised to address real human problems more comprehensively, and we will see applications take a different approach, integrating foundation models as one component of more holistic solutions.

We are also entering a phase with a keen focus on the value we get from Generative AI: exactly how it will make business processes more effective and generate new services. We will start to understand that not everything possible is useful, and not everything useful delivers true value to the business. I will explore these developments further in our next conversation.

How do you explain a concept so new that it’s hard to articulate?

Take communicating ‘risk’ during the pandemic for example.

As the Global Travel Taskforce sets out its framework to safely reopen international travel, its recommendations include the launch of a new ‘traffic light system’, categorising countries as ‘green’, ‘amber’ or ‘red’ based on risk, alongside the corresponding restrictions.

Ambiguous. But it can still be pretty universally understood.

Earlier this month, increasing evidence showed that there is a rare risk associated with the AstraZeneca jab.

Now, this type of risk is harder to articulate. With public health at the heart of the concern, it is even more challenging to give a clear indication of the risk level.

Experts and scientists up and down the country, and indeed all over the world, have been trying to help the public understand that the risk is ‘low’. The aim is to repair the possible damage to public confidence in the vaccine.

To help people understand and conceptualise it, various experts have tapped into the power of analogies to explain the idea that the vaccine poses a ‘very low risk’.

Communication can be hindered by conflicting information provided by multiple sources.

The problem, too, is that in high-context societies analogies are highly nuanced, especially when an analogy is drawn on the basis of superficial similarity.

Using analogies could be ‘risky’.

People ‘struck by lightning’ in one country may be better expressed as ‘hit by a sandstorm’ in another.

Or perhaps, in this case, using facts remains the only, if unattainable, answer?

Here are some of the stats from a recent research update:

“The clot risk from getting Covid is at least eight times greater than that from the AstraZeneca jab, research by Oxford University suggests…”

“The study of half a million Covid patients found that, overall, getting the virus increased the chance of cerebral venous thrombosis (CVT) 100-fold, compared with those without coronavirus…”

“In total, 39 in a million Covid patients suffered the clot, compared with rates of five in a million for those given the AstraZeneca jab and four in a million for those who had Pfizer or Moderna…”
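For anyone wondering where the ‘at least eight times’ figure comes from, a quick back-of-the-envelope check using only the per-million rates quoted above reproduces it. This is a rough illustration; the underlying study applies more careful statistical adjustments.

```python
# Rough illustration: relative risk implied by the per-million CVT rates quoted above.
covid_rate = 39 / 1_000_000        # 39 CVT cases per million Covid patients
astrazeneca_rate = 5 / 1_000_000   # 5 per million AstraZeneca recipients
mrna_rate = 4 / 1_000_000          # 4 per million Pfizer/Moderna recipients

print(covid_rate / astrazeneca_rate)  # 7.8  -> roughly "eight times greater"
print(covid_rate / mrna_rate)         # 9.75 -> roughly ten times greater
```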

Hard facts and stats they are, but are they enough to clear up some of the questions?

Does using data to try to get people to stop worrying about their risk always work?

How can we communicate with enough empathy to address people’s concerns?

When stats and facts are not readily available, how creative can we be in our communications?

I also begin to wonder: what would the French, Italians, Spanish or Japanese say to convince their own publics?

What do you think?

Reference: One of the research reports capturing the latest facts and evidence

Donnelly, L. and Bodkin, H. (2021) ‘Virus poses bigger risk of clots than AstraZeneca jab, says study’, The Daily Telegraph, 16 April. Online (accessed 20 April 2021).

Note: For credible sources of information, always refer to official channels, such as updates from the official website astrazeneca.com.