
Kate O’Neill on AI, Risk, & Readiness


Picture this: It’s 2025. Your marketing intern used an AI tool to generate content for your biggest client, accidentally included hallucinated product features, and hit send before anyone could review it.

Gave you a chill, didn’t it?

As the creator economy races to adopt generative AI tools, pausing to build proper content governance should be the next step.

Lucky for us, Kate O’Neill, author of “What Matters Next” and founder and CEO of KO Insights, shared her wisdom on navigating the wild west of AI-powered content creation before your team faces its own content crisis.

This interview is part of G2’s Q&A series. For more content like this, subscribe to G2 Tea, a newsletter with SaaS-y news and entertainment.

To watch the full interview, check out the video below:

Inside the industry with Kate O’Neill

Your latest book, “What Matters Next,” addresses future-ready decision-making. Can you tell us how this applies specifically to content risk management?

I think future-ready decision-making is a concept or a mindset that involves a balance between business objectives and human values. This plays out in tech because the scale and scope of tech decision-making is so huge. And a lot of leaders feel daunted by how complex the decision-making is.

Within content risk management, what we’re looking at is a need for governance and a kind of policy to be put in place. We’re also looking at a proactive approach that goes beyond just regulatory compliance.

The key is understanding what matters in your current reality while anticipating what will be important in the future, all guided by a clear understanding of what your organization is trying to accomplish and what defines your values.

I think the focus on creating robust internal frameworks will really benefit people when it comes to content risk. And those frameworks should be based on purpose and organizational values. It is very important to have a really clear understanding of what it is the organization is trying to accomplish and what it is that defines their values.

Transform your AI marketing strategy.

Join industry leaders at G2’s free AI in Action Roadshow for actionable insights and proven tactics to reimagine your funnel. Register now

Talking about content risks, what are the most significant hidden risks in content strategies that organizations often overlook, and how can they be more conscious in the future?

When I worked for a large enterprise on the intranet team, our focus was not just on content dissemination but also on maintaining content integrity, managing regulations, and preventing duplication. For example, different departments often kept their own copies of documents, like the code of conduct. However, updating these documents could lead to inconsistent versions across departments, resulting in “orphaned” or outdated content.

Another classic example that I’ve seen so many times is some kind of work process getting instantiated and then codified into documentation. But that document represents one person’s quirky preferences, which become ingrained in documentation even after that person leaves. This leads to maintaining non-essential information without a clear reason. And so I think these are the kinds of things that are very low-key risks. These are low-harm risks, although they add up over time.

What we’re seeing in the higher-risk stakes is not having clarity or transparency across communications and not being able to understand which stakeholders are responsible for different pieces of content.

Also, with generative AI being used inside organizations, we see a lot of people producing their own versions of content and then sending that out on behalf of the company to clients or to outside-facing media organizations. And those aren’t necessarily sanctioned by the stakeholders within the organization who want to have some kind of governance over documentation.

A comprehensive content strategy that addresses these issues at the regulatory, compliance, and business engagement levels would go a long way toward mitigating these risks.

With content strategies going global, regulatory differences across markets have complicated content risk management, particularly with the emergence of generative AI. What specific compliance issues should organizations be most concerned about?

We see this a lot across many fields of AI. We’re seeing how generative AI, particularly because of its widespread use, is clashing with global regulations. Especially in regions like the U.S., where deregulation is prominent, companies face challenges in establishing effective internal governance frameworks. Such internal governance frameworks are crucial to ensure their resilience in global markets and to prevent issues like the dissemination of unrepresentative content that could misalign with a company’s values or positions, potentially compromising safety and security.

We need to think about resilience and future readiness from a company leadership standpoint. And that means being able to say, “We need the best kind of procedures for us, for our organization.” And that’s probably going to mean being adaptable to any market. If you do business globally, you need to be prepared for your content to be consumed or engaged with by global markets.

“I think focusing on creating values-driven frameworks that transcend specific regulations is the right way to go.”

Kate O’Neill
Founder and CEO of KO Insights

We need to think proactively about governance so that we can create the kind of competitive advantage and resilience that will help us navigate global markets and changing circumstances. Because as soon as any particular government changes to a different leader, we may see complete fluctuation in those regulatory states.

So, by focusing on long-term strategies, companies can protect their content, people, and stakeholders and stay prepared for shifts in governmental policies and global market dynamics.

I see that you’re very active on LinkedIn, and you talk about AI capabilities and human values intertwining. So, considering the balance between AI capabilities and human values, what framework do you recommend for ensuring that AI-powered content tools align with human-centric values and not vice versa?

Contrary to the belief that human-centric or values-driven frameworks stifle innovation, I believe they actually enhance it. When you understand what your organization is trying to accomplish and how it benefits both internal and external stakeholders, innovation becomes easier within those well-defined guardrails.

I recommend using the “now-next continuum” framework from my book “What Matters Next.” This involves identifying your priorities now, engaging in scenario planning about likely future outcomes, defining your preferred outcomes, and working on closing the gap between likely outcomes and preferred outcomes.

This exercise, applied through a human-centric lens, is honestly the best thing I can think of to facilitate innovation, because it really allows you to move quickly while also letting you know that you’re not moving so quickly that you’re harming people. It creates a balance between technological capability and ethical responsibility that benefits both the business and the humans connected to it.

“Think about the balance between technological capability and ethical responsibility, and do that in a way that benefits the business and the humans that are inside and outside of the business at the same time.”

Kate O’Neill
Founder and CEO of KO Insights

Looking ahead, what skills should content teams develop now to be prepared for future content risks?

Content teams should focus on developing skills that combine technical understanding with ethical considerations until this integration becomes second nature. The other thing should be proactive leadership, and really thinking about how there’s a lot of uncertainty because of geopolitics, climate, AI, and numerous other topics.

And given the uncertainty of this time, I think there’s a tendency to feel very stuck. Instead, this is actually the best time to look ahead and do the integrative work of understanding what matters now and what will matter in the future, from one year to 100 years ahead.

The key is pulling those future considerations into your current decisions, actions, and priorities. This forward-looking integration is the essence of “What Matters Next” and represents the skills many people need right now.

If you enjoyed this insightful conversation, subscribe to G2 Tea for the latest tech and marketing thought leadership.

Follow Kate O’Neill on LinkedIn to learn more about AI ethics, content governance, and responsible tech.


Edited by Supanna Das


