I feel strongly that we must be skeptical of the impending artificial intelligence revolution. This tech is evolving. Quickly. And while the repercussions of certain elements remain to be seen, AI didn’t exactly come out of nowhere. In fact, it’s been around for quite some time; most of its current notoriety stems from hype among tech enthusiasts.
No one can know for certain how AI will impact digital marketing long term. As is the case with any emerging tech or new marketing tactic, it is our responsibility to give our clients the best possible recommendation. One critical aspect of this, we believe, is to establish ethical AI policies for our business before pushing our partners into unknown territory.
Looking before we leap
AI is having a moment right now in the tech space in part because companies are speed-reading the room and frantically slapping the phrase “AI” onto nearly every aspect of their business as a core benefit. It’s marketing. There’s huge interest in this space and everyone wants their piece of the pie.
At the rate that new technology hits businesses, it’s vital to recognize that tech companies and emerging technology hardly ever come with thoughtful ethical policies or guidelines.
First-to-market strategies rarely have time to include “What about…?” and “What if…?” scenarios. If you want to win the race, then those are the things that get sorted out later, if they get sorted out at all.
This isn’t how we want to operate. Long-term, it’s not good for us and it’s not good for our clients. To maintain the integrity of Formada’s mission and vision, we created guidelines that we think will help us take the healthiest possible approach to new and emerging technologies.
How we’re evaluating our AI ethics
At Formada, we had to sit down and determine our approach to ethical AI policies. We determined three questions to direct our usage of AI:
- From where does the data input originate?
- How does bias affect the output?
- Is AI replacing undesirable or ineffective work?
From where does the data input originate?
Artificial Intelligence is incredibly effective at sifting through data and finding patterns, but where does the data come from? This was the first question we wanted to ask ourselves because it opens up a lot of considerations.
For example, AI art platforms often steal artists’ artwork from websites and feed it into their own platforms. The artists are not compensated for their work, and users can mimic an artist’s style without permission ever being granted. Not only is this unethical, but it opens up massive risk and liability concerns for brands jumping into AI art with both feet.
Remember, this is also about protecting your content. These platforms are predictive in nature, and the knowledge required to inform these predictions has to come from somewhere.
Is it fair for an AI platform to scrape the information you’ve written on your website and present it as its own answer? We’ve heard of digital marketing agencies that feed their clients’ blog content into ChatGPT to write blog posts more quickly. But once it’s been fed into the platform, who owns that content?
Checking terms of service is essential. So is considering how bias in the input can affect the output.
How does bias affect the output?
Artificial Intelligence is programmed by people. AI also searches for easy patterns. Putting those two facts together should make it clear that AI is rife with opportunities for bias to affect its output.
An AI-generated Seinfeld parody that ran on Twitch for weeks was banned after it suddenly started making a series of transphobic jokes.
Microsoft’s Tay, a bot that “gets smarter the more you talk to it,” began spewing hateful, racist, and misogynistic output back at its users shortly after launch. It’s a perfect example of how technology can absorb hateful language and repeat it, because it cannot discern what is acceptable and what is not. When bad actors put garbage in, garbage inevitably comes right back out.
AI does not have morality. That’s why we’ve already seen projects like ChatGPT’s historical figure feature come under justified scrutiny. If these glaring examples are making their way out, what more subtle biases are showing up in AI output?
AI is only good at connecting ideas. It is not capable of evaluating the “good and bad” of those ideas. To mitigate potentially disastrous results, it’s essential to make sure that there are multiple people with numerous perspectives reviewing the output for bias.
Is AI replacing undesirable or ineffective work?
The tech community is making a lot of promises about how AI will reduce work. But it has been my experience, and I’d argue it’s historically proven, that in a capitalist system, technology that reduces work tends to mean cutting headcount, often replacing talented people with subpar, “cost-efficient” solutions.
While there may be other industries where the uniformity of certain AI outcomes is preferred, digital marketing is a field that requires distinction in our content, campaigns, and branding.
To be perfectly, undeniably clear: We will never be interested in replacing our team with AI.
Much of the internet is already being generated by AI or by overseas copywriters with dismal pay. This is not a future threat; it is the world we’re currently living in.
On top of this, some experts have predicted that 90% of content on the web will be AI-generated by 2026.
The sad reality is that many corporations will always be incentivized to replace people with software, even if the quality is poor.
This doesn’t mean that every aspect of AI should be deemed unethical or would have no place in our business. Far from it.
We are, in fact, interested in replacing undesirable or ineffective tasks with AI or other software solutions. This is why we periodically ask our team to assess whether AI or other software has the potential to improve their work.
For example, there’s no way a human can synthesize the amount of data digital marketing agencies collect. There’s simply too much to collect and organize by hand, which would leave even less time to assess the data and make the recommendations clients need to improve their business performance.
We rely on technology to present that data in segments that can be analyzed. We don’t want people counting the number of users visiting a website; we want our team analyzing patterns in those users’ behavior once they’re on the website, to make sure we’re providing the best user experience.
History doesn’t repeat itself, but it does often rhyme
I have an ongoing concern that we aren’t even trying to look five years ahead when it comes to the impact of these technologies. This concern comes from personal experience.
I got very caught up in the hype of social media when it first hit the scene. Its impact was instantaneous, the tech was constantly evolving, and it almost immediately was connecting people in ways I couldn’t imagine. (Seriously, did any generation prior know SO MUCH about what was happening in the lives of their former high school classmates? It’s doubtful.)
And after the bubbles of cryptocurrency, NFTs, and every other “next revolution” the tech community has pitched in recent memory, it’s essential that we stop, even if only for a moment, think critically, and take judicious steps to protect our businesses and our teams.
Our goal is to elevate our clients’ brands through work that is thoughtful, original, authoritative, and therefore effective. In its current state, much of the new AI tech doesn’t meet these standards.
Here are a few examples of what I mean:
ChatGPT and other AI tools are already being used aggressively by digital marketing agencies like ours across the world. ChatGPT is learning quickly, but if the inputs are the same, the outputs will be largely the same.
If we use ChatGPT to build us a framework based on a question, it will likely output much the same results for that question for 3, 5, or 20 other digital marketing agencies.
Even if we rework that content, the framework will still be similar. The internet is already very homogeneous, and leaning on AI too heavily could cost us what makes our brand and our clients’ brands unique.
Search engines’ long-term goals
This may be further out, but it stays top of mind for me. Google doesn’t hide the fact that its goal is to become the Star Trek computer (i.e., “You can talk to it—it understands you, and it can have a conversation with you.”). That is its end goal: not to give you options for answers to your questions, but to give you what it deems to be the answer.
Google Search has always been phase one. The next phase is a smart enough AI to output a singular result or answer. Where does that leave all the other businesses vying for an audience? In my estimation, ChatGPT and AI platforms are public right now because they want to test as much input and output as possible.
It’s essentially free labor that makes their platforms more effective. Google has no interest in, or long-term motivation for, giving users infinite answers; it already has those. What it ultimately wants is to give users the right answer. This probably means that, eventually, certain results will get phased out altogether.
Since Google already gives the largest online entities the lion’s share of search results, it’s safe to assume that small businesses, boutiques, and organizations will become less visible over time.
The bright side of AI’s evolution
Skepticism allows us to embrace the best parts of AI, but only after they’ve been fully vetted. And just like there are ways in which people can take advantage of emerging technologies in potentially negative ways, there are positive ways in which businesses can grow in this new world.
With the rapid rate that businesses are embracing new tech, it opens up opportunities for brands to carve out their niche. Ask yourself this question: What can you do that AI cannot?
With so many digital marketing agencies leaning hard into AI, a lane opens up for agencies like ours to lean even harder into the human experience in our content marketing, both for the Formada brand itself and for the benefit of our clients.
AI cannot generate quotes from providers, specialists, or business owners. It cannot tell personal stories or anecdotes. Sure, it can feign human emotion or feelings, but with about the same level of personal intimacy as a greeting card. (Apologies, greeting card writers.)
AI cannot mimic the emotions of specific people in specific situations, and ChatGPT and similar tools will always default to the lowest common denominator in how they frame a particular topic.
Since it’s predictive and its output is built on its input, it cannot provide a unique spin. It’s not built to do that. It will, however, give you the most expected and anticipated format in response to a prompt.
While you can ask it to reframe content in another light (which may be a helpful use case for AI), it will never be able to see with the clarity, or draw on the unseen intuitive factors, that make us human.
What I do know is that it’s my responsibility to apply my understanding of this technology in order to best serve our clients and our team, and I continue to look forward to taking advantage of the best of what’s available to us in order to do just that.