Is Ethical and Sustainable AI Possible?

Posted in Business Strategy, Digital Marketing, Sustainability

Image of a person standing before an array of large computers in a hazy city.
This image was not generated by AI. An actual artist was paid actual money to license it.

In this post, we explore several challenges to achieving more ethical and sustainable AI practices in your marketing. We also share several ideas on how to use AI tools more responsibly. 

Marketing technology evangelists—including a rapidly growing number of startups—promise an AI-driven utopia that will revolutionize our products and services and change how we market them. Unfortunately, this vision is marred by ethics and sustainability challenges with serious implications for humans and the planet. 

Is it possible for marketers to foster more ethical and sustainable AI practices? Let’s explore some potential ways to do this. 

Looking Beyond AI’s Marketing Hype

The shadowy realm of AI development and utilization breeds a lack of transparency and accountability regarding its environmental impact. Certain companies put their financial well-being and competitive edge ahead of any potential negative effects that AI technologies may have on the environment.

— Alokya Kanungo, The Green Dilemma: Can AI Fulfil Its Potential Without Harming the Environment?

To a first-time user, AI may seem a bit like magic. You type a prompt or upload some files and BAM!—content appears, answers are provided, images or videos are produced. The novelty of these tools, their ease of use, and their potential to save time and money are undeniably attractive to many.

Plus, fledgling AI startups are all too happy to try to cash in on your excitement. For example, check out one of the dozens of messages we receive every day trying to convince us to let AI run our agency:

With Next Gen A.I. Automations, you can:

  • Create stunning websites and engaging videos effortlessly. 
  • Craft compelling voice-overs and eye-catching art images. 
  • Generate effective social media posts and ad copies. 
  • Develop engaging content and blogs at scale. 
  • Plan comprehensive business and marketing strategies. 
  • Generate leads and manage marketing campaigns seamlessly. 

Whether you’re a seasoned marketer or just starting out, our platform empowers you to scale your business and maximize your earnings. Ready to take the next step? Join us today and start building your empire with [Your Company/Brand]

Over the past year, we’ve seen a significant rise in the number of these messages via email, contact forms, and social accounts. Data privacy and informed consent issues aside, in this instance they didn’t even bother to add a company name to the AI-generated campaign template. 

The web is already clogged with tons of inaccurate, low-quality spam messages. AI just turned on the firehose for billions, or even trillions, more. The quality control challenges are significant. And it’s only going to get worse. 

More importantly, it’s now also much harder for marketers to engage people via the web in meaningful ways. Is this race to the bottom what we really want for marketing? Moving forward, to create shared value among stakeholders, we’ll need to redefine success in ways that center both sustainability and responsibility in our efforts.

Below are some of the most common ethics and sustainability challenges with AI. 

AI’s Sustainability Challenges

AI can be used to inform sustainability efforts within organizations, which shows promise. However, AI processes themselves consume massive amounts of resources, which undermines those same efforts.

What’s more, few legislative guidelines currently exist to help companies use these new tools in more sustainable ways. This is especially important as:

  1. We’re constantly encouraged to fold AI’s various features into our daily lives on both a personal and professional level. 
  2. Ecosystem destruction and the climate emergency are an existential crisis for humanity. 

Like so many issues related to technological progress, we need to find a more responsible way forward. Some of the challenges include:

1. AI Drives Up Emissions and Energy Use

Some estimates claim a ChatGPT prompt has a carbon footprint 20 times that of a traditional Google search. What’s more, Microsoft saw its power usage rise 30% over four years due to data center expansion driven by increasing demand, much of it from AI.

Plus, thanks to a large North American construction bump, data center capacity nearly doubled, from 2,688 MW to 5,341 MW, in just one year. In the U.S. alone, data center electricity demand could double by 2030, driven largely by the increased computing power requirements of AI. Additionally, OpenAI and Microsoft plan to build a $100 billion AI data center to open in 2028.
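
For a sense of scale, here’s a hedged back-of-envelope calculation. It assumes a commonly cited estimate of roughly 0.2 g CO2e per Google search plus the 20x multiplier above; the prompt volume is purely hypothetical:

```python
# A hedged back-of-envelope estimate, not an official figure.
# Assumptions: ~0.2 g CO2e per Google search (a commonly cited
# estimate) and the 20x multiplier referenced above. The prompt
# volume is purely hypothetical.
google_search_g_co2e = 0.2
ai_prompt_g_co2e = 20 * google_search_g_co2e       # ~4 g per prompt

prompts_per_day = 1_000_000                        # hypothetical volume
daily_kg_co2e = prompts_per_day * ai_prompt_g_co2e / 1000

print(f"~{ai_prompt_g_co2e:.0f} g CO2e per prompt")
print(f"~{daily_kg_co2e:,.0f} kg CO2e/day at {prompts_per_day:,} prompts")
```

Under these assumptions, a million prompts a day adds up to roughly four metric tonnes of CO2e daily, which is why small per-prompt differences matter at scale.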

2. AI is Thirsty

As Google and Microsoft prepared their Bard and Bing large language models, both had major spikes in water use — increases of 20% and 34%, respectively, in one year, according to the companies’ environmental reports. One preprint suggests that, globally, the demand for water for AI could be half that of the United Kingdom by 2027. In another, Facebook AI researchers called the environmental effects of the industry’s pursuit of scale the ‘elephant in the room’.

— Kate Crawford, Nature, Generative AI’s Environmental Costs are Soaring — and Mostly Secret

Similarly, the stats on AI’s need for water, mostly used to cool processors and generate electricity, are equally sobering. For example, by some estimates, training GPT-3 used around 700,000 liters of water.

Large AI companies go to great lengths to downplay these statistics, despite widespread consensus on the web’s environmental impact. Currently, not enough public data is available on this or many other challenges in the sector. This underscores the need for more transparency and accountability overall.

3. AI Accelerates E-Waste

The global e-waste crisis is already staggering. Collectively, we generate about 62 million metric tonnes per year, a number that is rising rapidly as emerging technologies like AI take hold. Much of this waste comes from developed countries in the Global North, yet countries in the Global South are often left to deal with the problem.

Plus, less than 20% of the world’s discarded electronic devices are properly recycled. E-waste also makes up about 70% of the world’s surface-level toxic pollution, due in no small part to hardware and end-user devices ending up in landfills.

While AI can play a role in improving waste management, as with many challenges listed in this post, it’s also part of the problem. The exorbitant amount of computing power AI requires hastens the burnout rate of data center hardware.

Also, many data centers dispose of working equipment not because it is broken, but because it is out of date and, therefore, deemed obsolete. To successfully move toward a circular economy, the entire sector must change its practices. 

4. Rebound Effects 

In the presence of relentless demand and prioritization of economic growth, this siloed focus on efficiency improvements results instead in increased adoption without fundamentally considering the vast sustainability implications of Gen-AI.

— Noman Bashir et al., The Climate and Sustainability Implications of Generative AI

While marketers at AI companies focus on how generative AI can improve efficiency, it is important to consider the inevitable rebound effects. These occur when efficiency gains increase demand enough to escalate total resource use rather than reduce it, a phenomenon known as Jevons Paradox, after the 19th-century economist who first observed it.

In other words, marketers who use AI to run their campaigns will inevitably just run more campaigns. At scale, these rebound effects have significant environmental implications.
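
A toy calculation makes the rebound dynamic easy to see; all of the numbers below are hypothetical:

```python
# A toy illustration of a rebound effect (Jevons Paradox).
# All numbers are hypothetical: per-campaign energy halves, but
# cheaper campaigns triple campaign volume, so total use still rises.
energy_per_campaign_kwh = 100.0
campaigns = 10

total_before = energy_per_campaign_kwh * campaigns       # 1,000 kWh

efficient_kwh = energy_per_campaign_kwh * 0.5            # 50% efficiency gain
campaigns_after_rebound = campaigns * 3                  # demand rebound
total_after = efficient_kwh * campaigns_after_rebound    # 1,500 kWh

print(f"Total energy: {total_before:.0f} kWh -> {total_after:.0f} kWh")
```

Even with a 50% efficiency gain, total energy use grows by half once demand rebounds.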

Image of Tom Hanks from the movie 'The Terminal'.
AI has suggested that this author can travel internationally without a passport. Perhaps it should watch The Terminal with Tom Hanks. Image: Amblin Entertainment.

Ethical Challenges with AI

Next, AI poses considerable ethical and legal challenges to just about every industry you can think of. Exploring each in detail is beyond the scope of this post. However, we’ve outlined some of the most common issues relevant to marketers below. 

1. Misinformation & AI Hallucinations

From providing fictitious case law to lawyers to producing misleading information regarding real-life military conflicts, examples of AI misinformation or “hallucinations” are widely documented. This presents huge challenges for marketers and anyone who uses AI in their content workflows, especially if incorrect information leads to problematic or otherwise unethical behavior. 

What’s more, dangerous circumstances can arise from AI misinformation, as in the case of a man who ended his life after being encouraged by an AI chatbot to do so in order to fight climate change. It’s an extreme example that shows just how dangerous this problem can be.

2. Disinformation: Bad Actors

With the advent of AI, it became easier to sift through large amounts of information and create ‘believable’ stories and articles. Specifically, LLMs made it more accessible for bad actors to generate what appears to be accurate information. This AI-assisted refinement of how the information is presented makes such fake sites more dangerous…

— Walid Saad, AI and the Spread of Fake News Sites, Virginia Tech News

Similarly, bad actors use AI tools to intentionally sow disinformation. These issues range from relatively harmless to severely problematic.

When it’s so easy to game search engines, social media algorithms, or other online platforms with auto-distributed messages meant to intentionally mislead or confuse large numbers of people, it is inevitable that those weaknesses will be exploited. 

3. Data Privacy & Copyright

As privacy laws around the world become more stringent, AI puts our data increasingly at risk. Two common scenarios drive this problem:

  1. AI companies change their Terms of Service so they can use your data to train their models.
  2. People upload private information when using AI tools, unaware of said Terms of Service or that their data could be used illegally. 

Tech companies are running out of available data to train their models. Some have turned to customer data, regardless of whether or not they have the right to do so. Plus, much of the internet data used to train AI models is copyrighted. 

We’ve already seen countless companies like Meta, Adobe, and Microsoft change Terms of Service in ways that add ambiguity about how, or whether, they use customer data to train their AI tools. For other companies, like OpenAI, that has always been part of the deal.

There is a clear gap between the transparency AI companies provide about their training practices and the informed consent of those whose data are used to train their models. Plus, some AI companies want to train their models without compensating contributors, with some going as far as to use pirated data sets from unsuspecting authors.

It is inevitable that lawsuits related to these practices will make data privacy and copyright issues more relevant moving forward. 

4. Algorithmic Bias

AI reflects the biases of its creators and the information its models train on. Whether it’s security surveillance tools, driverless cars, or employee recruitment platforms, the examples of algorithmic bias are numerous. The lack of diversity in AI development has led to serious real-life harm for people of color. 

The takeaway for marketers: when we use AI tools, it is entirely possible that this algorithmic bias could extend to our work as well.

5. Wage Theft & Unemployment

Technology remains the sector with the highest number of job cuts on the year at 47,436. Of the cuts across sectors, 800 lost jobs were blamed on AI, the highest number of layoffs citing the reason since May of 2023.

— Mary Whitfill Roeloffs, Almost 65,000 Job Cuts Were Announced In April—And AI Was Blamed For The Most Losses Ever, Forbes

Unfortunately, many companies will use AI to drive layoffs. Our industry already struggles with wage theft in the form of underpaid gig economy workers who don’t have access to healthcare, profit sharing, retirement planning, or other benefits of full-time employment. AI will exacerbate these issues even further.  

Marketers are especially vulnerable to this as AI tools increasingly replace copywriters, art and creative directors, and web designers and developers. This could lead to quality control issues and potential ethical challenges inside agencies as well. 

6. An Uncertain Future for Search

Finally, new AI search features will require SEOs to rethink how they approach their craft—and content marketing in general. To date, these updates have lacked transparency, leaving confusion and uncertainty in the search market. Plus, AI-generated search results suffer from the same misinformation and quality issues that plague other AI tools. Search Generative Experiences have even offered potentially dangerous health recommendations, like drinking urine, among other things.

Considering how many people around the world use search engines to answer their daily questions, the misinformation challenges described in point #1 above are relevant here as well. To craft a more sustainable SEO strategy, digital marketers must continue to stay on top of rapidly changing trends in AI-powered search.

Illustration of a variety of people in a store.
Illustration of a variety of people in a store with errors circled.
We identified at least 16 potential problems with this illustration generated by ChatGPT, underscoring an urgent need for quality control in AI-powered marketing and communications.

Six Ethical and More Sustainable AI Practices

With all these issues at play, is ethical and more sustainable AI even possible? There is obvious tension between AI’s explosive growth and the ethics and sustainability challenges outlined above. However, the guiding principles below can help marketers incorporate more ethical and sustainable AI practices into their work. 

1. Human Oversight for AI Processes

In its current state, AI is nowhere near replacing humans. While AI works faster than we do, it cannot catch its own errors, making it an unreliable source of information.

To address this, marketing teams must oversee AI outputs in their campaign and content workflows. Most organizations have editorial processes to improve the quality of their marketing and communications. Oversight helps marketers identify and remedy gaps or misinformation. In other words, fact-checking, quality control, and strategic thinking become even more critical in an AI-enabled world. Prioritize these things when incorporating AI into marketing campaigns.
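
To make that concrete, here’s a minimal human-in-the-loop sketch in Python; the Draft class and workflow are illustrative assumptions, not a real publishing system:

```python
from dataclasses import dataclass

# A minimal human-in-the-loop gate. The Draft class and workflow are
# illustrative assumptions, not a real publishing system; the point is
# that nothing AI-generated ships without a named human reviewer.
@dataclass
class Draft:
    text: str
    ai_generated: bool
    reviewed_by: str | None = None  # name of the human fact-checker

def ready_to_publish(draft: Draft) -> bool:
    """Block AI-generated drafts until a human signs off."""
    return not (draft.ai_generated and draft.reviewed_by is None)

draft = Draft(text="10 Orchid Care Tips", ai_generated=True)
assert not ready_to_publish(draft)   # blocked: no reviewer yet
draft.reviewed_by = "Nicole"
assert ready_to_publish(draft)       # a human has fact-checked it
```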

2. Define Responsible AI Criteria for Your Organization

AI is often billed as a time-saving tool. However, its unreliability can sometimes create more work than it saves. For example, does fact-checking answers to simple questions actually save you time? 

Ask yourself: is AI the most effective way to approach a task? Like all computing models, AI employs rules-based thinking. It does not have experiences, and it cannot develop new ideas. AI also lacks emotional intelligence and empathy, and it cannot make moral or ethical judgments.

Still, AI can be useful for some common marketing tasks. Identify what those are for your organization and incorporate responsible practices to implement them.

For example, it might be challenging to align your AI strategy with a clear climate strategy, given the current lack of transparency around energy use and other sustainability data points from AI companies. However, if these issues remain priorities for your organization, you will eventually find more responsible partners. It will just take time, patience, and diligence (see point #6 below).

Similarly, if your organization has a clear Code of Ethics, consider starting there. If not, feel free to use ours as a baseline.

3. Focus on the Prompt

Prompts provide context for AI responses. Slight variations in your AI prompts often yield very different results. Sometimes, it takes multiple prompts to get the information or results you need.

For example, I asked ChatGPT how to plant orchids. The answer was thorough and well thought out. However, it assumed I wanted to plant orchids indoors.

Preparing the Pot and Medium

  1. Pot Selection: Use a pot with drainage holes. Clear plastic pots are popular as they allow you to monitor root health.
  2. Orchid Medium: Orchids do not grow in regular soil. Use a special orchid mix containing bark, perlite, and sphagnum moss.

But what happens if I want to plant orchids outdoors? I could clarify in a second prompt that I meant planting orchids outdoors, and ChatGPT would generate an entirely new set of instructions based on that criterion.

Choosing the Right Orchid and Location

  1. Select Suitable Orchids: Choose orchids that are suited for outdoor growing in your climate. Examples include Cymbidium, Dendrobium, and some species of Cattleya and Oncidium.
  2. Climate Check: Ensure your local climate can provide the necessary warmth and humidity. Orchids generally prefer temperatures above 50°F (10°C) and high humidity.
  3. Location: Find a spot with filtered sunlight, such as under a tree or a shade cloth. Avoid direct, harsh sunlight as it can burn the leaves.

ChatGPT has now generated two responses to the same question. Minimizing the number of prompts can reduce the resources an AI tool uses. Often, the easiest way to do this is to be clear and detailed in your first prompt.
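
Here’s a minimal sketch using the OpenAI Python SDK that folds all of the orchid context into a single request; the model name and gardening details are illustrative assumptions:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

# One detailed prompt with all the context up front replaces two
# round-trips (the generic indoor answer, then the outdoor correction).
prompt = (
    "How do I plant orchids outdoors in USDA zone 9, in a spot with "
    "filtered afternoon light? Recommend suitable species and give "
    "step-by-step planting instructions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

One well-specified request does the work of two, halving the round-trips and the resources they consume.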

Making a good prompt means understanding two basic concepts:

1. Context windows

A context window is the maximum amount of text, measured in tokens (roughly, words and word fragments), that a model can consider when it generates answers. In human terms: if I ask you what 2 + 2 is, it’s easy enough to do that math in your head, right?

However, if I ask you to calculate 1237.4 × 53.3, you might struggle to do that math in your head because it heavily taxes what is known as working memory. Calculating an answer becomes harder because you have to remember what you did several steps ago, which affects how you approach the next step, and so on.

Generative AI struggles with the same problem. Current AI models don’t have enough computational power to consider everything you’ve ever told them, everything they already know, and the prompt every time they generate an answer. 

As AI nears the end of its context window, it becomes less accurate. This is why machine learning has struggled with complex, dependency-heavy tasks like writing a novel or complicated code. When AI reaches the limits of its context window, the result can be hallucinations and nonsensical answers.

Unfortunately, there’s no silver bullet to working within context windows. This is the main limitation of large language models right now. 

In other words, an AI tool’s ability to remember what you told it earlier will diminish as the exchange goes on. To get the most out of AI, keep tasks simple and straightforward.
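
If you’re unsure how much of a context window a prompt consumes, OpenAI’s open-source tiktoken library can count tokens. A minimal sketch; cl100k_base is one common encoding, and the right encoding depends on your model:

```python
import tiktoken  # OpenAI's open-source tokenizer library

# cl100k_base is one common encoding; the right one depends on your model.
enc = tiktoken.get_encoding("cl100k_base")
prompt = "How do I plant orchids outdoors in USDA zone 9?"
print(f"{len(enc.encode(prompt))} tokens")
```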

2. Provide AI tools with context

Next, when we talk to other people, we use shortcuts in our speech and rely on context to fill in the gaps. If I’m standing outside next to a surfboard and ask a surf instructor ‘How do I surf?’, I expect them to understand that I’m asking how to surf on waves versus how to surf the internet. 

Similarly, as digital tools have become more sophisticated, we expect that they understand our context. In other words, when I type Chinese food into my search bar, I expect it to know that I want Chinese food near me, not 200 miles away.

However, many AI tools do not pick up on context clues automatically. Just as we’ve learned over the years to refine Google search queries with context keywords, we should aim to give AI tools the context they need to answer our questions.
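
As a small sketch of the idea, you can wrap a question with the context an AI tool can’t infer on its own; the helper and its fields below are illustrative, not a standard API:

```python
# Wrap a question with the context the tool can't infer on its own.
# The helper and its fields are illustrative, not a standard API.
def with_context(question: str, location: str, goal: str) -> str:
    return (
        f"Context: I am in {location}. My goal is to {goal}.\n"
        f"Question: {question}"
    )

print(with_context(
    "Where can I get Chinese food?",
    location="Evanston, Illinois",
    goal="find a restaurant within a few miles",
))
```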

4. Respect Stakeholder Data

We are witnessing an AI arms race. The most valuable asset in this race to train better and more complex models is data. There could be significant risks involved with sharing stakeholder data with AI companies. To address this challenge:

  1. Read all Terms of Service carefully before you use any AI tools. 
  2. Never share customer or other stakeholder data without explicit, informed consent. 
  3. Review current organizational privacy and data-sharing policies. Do they need to be updated to include AI-specific language? 

To enable more sustainable data strategies within organizations, enact data governance policies that provide clear guidance on acceptable and unacceptable uses of stakeholder data.
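
One simple safeguard, sketched below, is scrubbing obvious personally identifiable information before text ever reaches an AI tool. Two regular expressions are nowhere near complete redaction, so treat this as an illustration of the principle rather than a production solution:

```python
import re

# Scrub obvious PII before text ever reaches an AI tool. Two regexes
# are nowhere near complete redaction; treat this as an illustration
# of the principle, not a production solution.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
US_PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL REDACTED]", text)
    return US_PHONE.sub("[PHONE REDACTED]", text)

print(redact("Contact Jane at jane@example.com or 312-555-0142."))
```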

5. Develop an AI Governance Strategy

Everyone thinking about using AI should also think about how to adopt effective AI governance practices. Good AI governance answers important questions like:

  • What is our stance on the ethics and sustainability issues associated with AI?
  • What will our organization use AI for?
  • Which tools will we use?
  • What approach will yield the best results for our needs?  
  • Are we comfortable sharing stakeholder data with AI companies?
  • What legal and regulatory risks are involved with incorporating AI into our business or marketing practices?
  • What is our approach to quality assurance (QA)?
  • How will we train employees, vendors, and other stakeholders to build capacity and maintain ethical practices and consistency over time?

Responsible organizations should answer these questions and draft policies and practices to maintain good long-term AI governance.
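
One way to operationalize these answers is to record them as a version-controlled artifact the whole team can review. A hedged sketch, with every key and value illustrative:

```python
# Record governance answers as a version-controlled artifact the whole
# team can review. Every key and value below is illustrative.
AI_GOVERNANCE_POLICY = {
    "stance": "human oversight required; ethics and sustainability first",
    "approved_uses": ["first-draft copy", "alt-text suggestions"],
    "prohibited_uses": ["sharing stakeholder data", "final legal copy"],
    "approved_tools": [],  # vetted vendors go here after review
    "qa_process": "every AI output is fact-checked by a named reviewer",
    "review_cadence_months": 6,  # revisit as tools and laws change
}
```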

6. Support Impactful AI Legislation

Finally, and perhaps most importantly, we must push for legislation and regulatory guidance that increases transparency and accountability for AI companies and keeps people and the planet out of harm’s way. 

Technological progress moves far faster than public policy. Conscientious companies and nonprofit organizations must prioritize responsible tech legislation to address AI’s unintended or otherwise problematic consequences. 

This AI regulation tracker can help you stay up to date on current legislative issues related to AI in the U.S. You can also check out the Responsible Tech Advocacy Toolkit below.

Moving Toward More Ethical and Sustainable AI

So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public. Yet broad confidentiality agreements block us from voicing our concerns, except to the very companies that may be failing to address these issues.

— Former AI Company Employees, A Right to Warn about Advanced Artificial Intelligence

AI is on a familiar path of explosive, unregulated growth. Unfortunately, this often comes with indifference to ethics and sustainability. We’ve seen this in many other industries, as well as in previous technology trends like social media.

To craft more ethical and sustainable AI practices, we must first understand the growing array of problems these tools create so we can learn to use them more responsibly. Then, as new tools and tactics take shape, we can adjust our practices accordingly. 

Think we missed an important point on more ethical and sustainable AI use? Please drop us a line. We would love to hear from you. 

Responsible Tech Advocacy Toolkit

Advocate for responsible tech policies that support stakeholders with this resource from the U.S.-Canada B Corp Marketers Network.

Get the Toolkit
Nicole Hunter is a project manager at Mightybytes. She is passionate about learning how information is spread on the internet, from search engines to social media, and using that knowledge to create marketing solutions that serve her clients.