Revisiting Our Stance On AI Ethics And The Future Of Creative Work

by Garrett Jackson

Jun 28, 2024

Posted in Uncategorized

Formada Co-Founder and COO, Garrett Jackson, discusses AI ethics.

“Some creative jobs maybe will go away, but maybe they shouldn’t have been there in the first place.”

– OpenAI’s CTO, Mira Murati

It’s hard to articulate how much I hate this sentiment. I’ve seen this clip bouncing around social media this week, and the outrage it has conjured is justified. This statement is ignorant at best, and dehumanizing at worst.

We’ve already talked about Formada’s AI ethics policy on our blog. And while it was a preemptive decision to roll out a policy, now that we’re a year or two into a deluge of “AI-improved software,” it’s safe to say that our perspective has held up well.

To sum the AI ethics policy up, we run all AI choices through these questions:

  1. From where does the data input originate?
  2. How does bias affect the output?
  3. Is AI replacing undesirable or ineffectual work?

People-First AI Ethics

One thing that was important to Meghan and me when starting Formada was acknowledging that we are a for-profit private business in a capitalist society. We’re not trying to pretend we aren’t, but we know we can be different. We made sure our values, actions, and allegiances were to our team — the people who have given their work and their time to build this with us.

The opening lines of our AI policy drive this home:

Artificial intelligence (AI), like most technology, can be used in positive or detrimental ways. We hired you. That means a lot to us. We want your ingenuity, creativity, and perspective first and foremost. 

We have no interest in replacing our team with AI or software. When we hire people, it’s with a clear understanding that this role can only be successful with a human’s strategic mind.

What makes Formada unique is the quality of our work. 

Quality can mean a lot of things. To me, it means understanding language, experience, and emotions, and then strategically applying that knowledge to your work. Artificial intelligence can’t do that. It imitates understanding.

I grew up working on cars with my dad and grandpa. They would ask me to grab a particular wrench, a Phillips screwdriver, or a certain socket. I used to slap my hands over my ears every time the air compressor motor started up. It still scares me to death. But from as early as I can remember through the age of 18, I would get the tools, hold the thing in place, and move the things around.

Now, unfortunately, I have no interest in cars. I could change oil and brakes, and even swap a distributor. But I could not tell you how these things worked. I vaguely understood them, but I was ignorant of the why. It meant I couldn’t troubleshoot a noise or really do anything on my own unless it was rote and repeatable.

AI has a similar rudimentary level of understanding, mashing together words its neural network has learned often appear together. It mimics, follows the simplest path, and has no depth to its understanding.

We know many agencies that have turned to AI for content, and their end product has suffered because they didn’t put people first. This story about me in my parents’ garage could be feigned by AI, but it wouldn’t be felt by the reader.

My biggest critique of opinions like Ms. Murati’s is that they approach all human work as problems to be solved. When you spend your career in mathematics and engineering, that’s a difficult paradigm to break out of. Writing becomes just a product, to be produced as cheaply as possible to meet minimum requirements. There’s no artistry or human quality to it.

Our AI ethics policy focuses on how artificial intelligence can improve our work, not replace it with a subpar product.

Can AI Improve Efficiency & Quality?

Murati’s interview did course-correct a bit after that quote, focusing on the idea of AI expanding human creativity and efficiency. In that way, we are somewhat aligned. My caveat is that it has to be grounded in stringent ethics, which I’ll get to later in this post. But let me share a few examples that we as a team have approved for AI.

  • AI can be used for prompts
    • Ask AI for prompts on a particular keyword
    • Ask AI to reframe a topic
    • Ask AI for additional questions about your topic
    • Ask AI to analyze Google Trends data and provide trending keywords within an industry or topic
  • AI can be used to encourage variety
    • Ask AI to give you variations of your already-written ad headlines
    • Ask AI for additional words or phrases (like a thesaurus)
  • AI can be used to check quality
    • Ask AI to check if there are too many repetitions in a paragraph
    • Ask AI to check the readability or reading level of a paragraph
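
As a side note, the readability check in that last bullet doesn’t even require AI. Here’s a hypothetical sketch (not part of Formada’s actual tooling) of the classic Flesch-Kincaid grade-level formula computed directly, with a crude vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count vowel groups; every word has at least one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    # Flesch-Kincaid grade level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)

print(round(flesch_kincaid_grade("The cat sat on the mat. It was happy."), 1))
```

A deterministic formula like this is free, instant, and carbon-cheap, which is exactly the bar an AI tool has to clear.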

These help our team deliver an improved end product that is still written by a human. They can get ideas and refine their work without resorting to subpar content generation. But, here’s the thing:

Even after we granted our team access to ChatGPT for the above uses, we’ve seen a dramatic decline in AI prompts across all team members. It was novel, but in most cases, ineffectual.

AI is at its best (to me, anyway) when it is replacing work that cannot be done efficiently by a human. 

Google Analytics collects a seemingly endless amount of information about users when they visit websites. Pattern detection within this data and searching for anomalies is a huge value add because I wouldn’t ask someone to sit in front of a massive database slowly picking through it trying to find a pattern. 
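
To make the idea concrete, here is a minimal sketch of that kind of pattern hunting: flagging days whose session counts sit more than a couple of standard deviations from the mean. The numbers are invented for illustration, not a real Analytics export:

```python
import statistics

def flag_anomalies(daily_sessions: list[int], threshold: float = 2.0) -> list[int]:
    """Return indices of days whose session count deviates more than
    `threshold` standard deviations from the mean (a simple z-score flag)."""
    mean = statistics.mean(daily_sessions)
    stdev = statistics.stdev(daily_sessions)
    return [i for i, n in enumerate(daily_sessions)
            if abs(n - mean) / stdev > threshold]

# A quiet week with one suspicious spike on day 5 (index 5).
sessions = [120, 131, 118, 125, 122, 410, 119]
print(flag_anomalies(sessions))  # flags the spike
```

Real tooling works over far more dimensions than one metric, but the principle is the same: let the machine comb the haystack so a human can judge the needle.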

This same kind of application has even been used in healthcare to find patterns and trends that would have been difficult to detect otherwise. 

The way our policy also helps us address AI usage is by asking ourselves this question:

Is AI replacing undesirable or ineffectual work?

If it’s not, then we don’t employ it.

Our Stance On AI Ethics

I love science fiction. You’d think that many people at these AI companies would too. But media literacy may be one of the other things they traded off for profit. In most science fiction stories, from Philip K. Dick novels to Spider-Man comics, the aim is to show that technological advancement without a strong sense of ethics leads to disaster.

AI is already causing significant harm to people, to the environment, and to those investing in good faith.

Reject Theft

One of our hard and fast rules at Formada is that AI art is strictly forbidden. AI companies that generate art have to train their data on artists’ works and in many cases are not seeking any form of consent. 

Artists’ work is being stolen, fed into AI, and then spit back out. This has been devastating financially and emotionally to artists and we will not play a part in this.

Anecdotally, it has also made us reassess many of our software partners. Stock photography websites have become almost unusable unless you set the date filter to before 2020.

We don’t want any AI monstrosities sneaking into our creative, so we’ve had to be on guard. We’ve also had to reassess companies that are foundational to Formada’s work because of their poor AI policies.

The foundational question from our AI ethics policy is:

From where does the data input originate? 

We must have consent. The data must be sourced responsibly. And if it is not, it has to be rejected.

Reject Environmental Waste

If trends continue, AI could soon be among the largest contributors to carbon emissions in the tech sector. According to the MIT Sloan Management Review, “A single average data center consumes the equivalent of heating 50,000 homes yearly.”

As news about the environmental impact of AI has gotten increasingly bleak, we have had to consider if the benefits of using the technology outweigh our contribution to that problem. So far the answer has been no and it seems unlikely that it will change. 

Ironically, AI was touted as an incredible tool for sustainability: detecting energy usage patterns and enabling smarter, more efficient consumption. Ideally, we can get back to that perspective on AI instead of granting it broad use by the entire public.

Unfortunately, it seems unlikely that public policy will keep pace with these technological advances, just as it failed to keep pace with social media companies.

Reject Hype

The biggest joke is the companies implementing “AI-enhanced software” that is essentially just pre-existing technology with “AI” slapped on to cater to their stakeholders. We use a project management software that had recently upgraded its search functionality, but with all the AI hype going on, they put it behind a paywall and asked us to pay more for “AI-powered search.” It’s bullshit.

One of the things that has been most painful to watch across major sections of the tech sector is the obsession with an endless string of revolutionary features.

Blockchain, cryptocurrency, NFTs, and now AI all have been hyped by tech companies and sold to us as something that will mark a new moment in history. This may be appealing to investors and stockholders hoping that their bets are placed in companies that are “ahead of the curve” but it often results in the same level of technological revolution as keeping up with the Joneses. 

In many cases, most of these AI startups are not profitable and don’t really know how to make money. It feels like the same story we’ve seen ad nauseam the last ten years in tech.

I want to make this clear: There are innovative, brilliant things being done with this technology, but it’s often not the large organizations that are doing the interesting things. 

How does this work practically? We stay aware of and alert to the changes that AI is making to our industry and others, such as Google’s recent AI adoption and its AI-related search algorithm updates.

We are aware of these things and also of the consumer frustration being voiced. But we focus on tried and true best practices. We have made adjustments to our services to be positioned for change, but we don’t overhaul our work to match something that will amount to a fad.

But this perspective protects us. Because what if I’m wrong? What if a new AI company really does revolutionize everything? We don’t reject technological advances, and this is precisely why we have an AI ethics policy that helps us assess change and be ready to make decisions when the time is right.

It keeps us on our toes. It keeps us honest about doing great work for our clients and being good stewards to our team. 

To Meghan and me, Formada isn’t about hype, it’s about creating something of value and providing a high level of quality. This will matter to people no matter what technology is being used. 

So here’s to environmentally sound, worker-supporting, and effectual AI software in our future. And as far as the present is concerned, we’re here to do high-quality work for whoever is interested in impressive results and sustainable growth. Contact us to learn how.


Get in touch with the Formada Team