In a recent blog post for CMS Critic, Kentico MVP Brian McKeiver reflects on the adoption of AI technologies and tools in the digital marketing space, and specifically at his agency, BizStream.
> As powerful as AI is, in my opinion, content creation and digital teams are rushing headstrong into using tools like ChatGPT, Copilot, and Gemini without considering how to use them responsibly.
Regarding this responsible use of AI, Brian asks a pointed question that many of us might not be able to answer.
> Does your digital marketing team have an AI Policy in use?
It turns out Brian had an answer, but not the one you'd expect from someone advocating AI policies for a team.
> Mine didn’t. In fact, I’d wager that most teams haven’t even prioritized having such a policy. I believe it’s time to start thinking about implementing one.
What are the types of things you need an AI policy to cover? Well, the types of tasks that marketers on your team are most assuredly using AI for today: writing content, translating from one language to another, automating content delivery, and optimizing campaign performance.
And Brian notes that products like Xperience by Kentico are among the platforms and services rushing to add value with AI capabilities. Does this mean we should avoid AI or products leveraging AI? No! But it does mean we should be intentional about adopting the technology and have an AI policy in place within our organizations.
As Brian puts it...
> Adopting AI in digital marketing isn't just about leveraging new technologies for improved outcomes; it's also about acknowledging and mitigating the potential risks associated with AI's autonomy and influence. Issues such as data privacy, bias, and the chances of hallucinations (errors by a generative AI tool) pose significant challenges. Without a comprehensive AI policy, organizations risk breaching ethical standards, violating customer trust, and potentially facing legal consequences.
I love that he mentions risk, because every opportunity is only an opportunity because of some risk, some opportunity cost. The AI hype machine is real, and it is in the process of selling right now: selling products, services, and a future workplace and world full of AI. This also means AI service providers hope you focus more on the opportunity and less on the risk, because that will generally mean more revenue for them.
It seems to me that Brian's suggestion is to purposefully and thoughtfully make sure your team is aware of both.
> [...] having a well-defined, responsible AI policy should reduce friction with AI usage. It can ensure that while organizations pursue technological innovation, they remain steadfast in their commitment to ethical practices, full transparency, and working within societal norms.
This reminds me of the discovery phase for DXP solutions that I helped define when I worked for a Kentico partner agency. The most important outcomes at the beginning of a discovery process were the following:

- What are the goals of the project? What are the key metrics the marketing team and organization can measure to know they accomplished what they set out to do? We'd have the client define 2-4.
- What are the priorities of the project? We know we can't do everything, so when we are considering whether something should be "in scope", we can compare it against these priorities. We'd have the client define 5-7.

Having these goals and priorities would help everyone involved make decisions quickly and confidently because we had agreed-upon points to base those decisions on.
Brian then shares a worksheet that gives teams a starting point when defining their AI policy.
> My goal for this tool is to ensure that Responsible AI practices are considered when selecting third-party vendors, tools, platform features, and subscription services. I also aim for it to add governance to both our adoptions and those of our clients.
This is awesome, and I definitely recommend every team start with the worksheet to either build their AI policy or get the conversation started. With an AI policy in hand, a team can more easily identify the risks and opportunities of AI technology without bringing everything to a grinding halt: the policy is the guidepost the team agreed on, and it doesn't need to be re-created with every conversation.
Of course, Brian cautions against setting an AI policy in stone and offers an alternative perspective.
## Wrap up
I definitely recommend reading Brian's article, Responsible AI Principles for Digital Marketing and Content Teams, and taking a look at the worksheet he has linked in the post.
While I might not be able to tell you if any given AI technology is an opportunity, letting your team adopt one without an AI policy is definitely a risk.