
The Rise of the Chief AI Ethics Officer

By Sarah Hallam

Last updated: Apr 5, 2023

As more companies adopt AI into their workflows, the AI Ethics Officer has become a crucial role in the C-suite. But leading experts and ethicists in the field say that just adding a title next to a name is not enough.

According to a recent survey by Deloitte, 43% of 2,737 executives said AI “will transform” their organizations in the next 1-3 years. Image Courtesy of Shutterstock.

Artificial Intelligence (AI) has already entrenched itself in the financial and tech industries, and the adoption of data-driven models is still growing. A KPMG report published last month found that 84% of financial services companies say “AI is at least moderately to fully functional in their organization,” a 37% increase from last year. Among tech companies, 83% say the same, a 20% increase from last year.

Hence the rise of the AI Ethics Officer, an emerging C-suite role dedicated to ensuring that the growing use of AI models is grounded in an ethical and moral framework. Yet despite the growing adoption of AI, too many companies mistakenly believe they can fill this crucial new role with just one person.

Cansu Canca is the Founder and Director of AI Ethics Lab, a research and consulting firm that works with organizations to ensure their AI models meet ethical standards. She says that high-profile misuses of AI in the past few years, like the Cambridge Analytica scandal, have heightened public scrutiny and mistrust of machine-learning tools.

“Any model that’s going to be solving things can also solve things incorrectly,” Canca says. “Ethics matters [in AI] because we personally cannot protect ourselves from these systems that are supposed to be ethical. You can’t fend for yourself in the digital world. You’re subject to these different AI structures.”

Many tech companies have started to create and fill ethics-officer roles amid this growing public pressure. In 2018, Microsoft established its first full-time position dedicated to AI policy and ethics after a few years of early research on the complexities of ethics and AI, and in 2019, Salesforce very visibly hired Paula Goldman as its first-ever Chief Ethics and Humane Use Officer.

But Canca points out that the mere creation of these positions isn’t enough to tackle ethical issues in AI head-on; it needs to be a team sport.


“I think the most important thing to ask is, who is working for these Chief Ethics Officers?” Canca says. “What is the real definition of the job? How involved is the team? Are they [involved] in the questions that developers and designers have as they are working towards a solution? How do they decide what is the ethical solution for a given question?”

Will Griffin is the Chief Ethics Officer at Hypergiant, an AI-technology solutions company based in Austin, Texas. Griffin oversees the company’s ethical vetting process, which involves nearly every team. He says his job is two-fold: educating designers, engineers, and developers about Hypergiant’s ethical framework so that responsible AI becomes an integrated part of their workflow, and then, once a tech solution is created, overseeing a rigorous vetting of the new model through an ethical lens.

Often, Griffin’s work involves asking his teammates some tough questions: Is there a positive-intent use case for this product? If every company in the world deployed this product, what would the world look like? What groups could be inadvertently harmed?

“We don't wait on the client to give us their ethical vetting framework,” Griffin says. “They get to choose between a vetted set of solutions that we've already created. And it ensures that ethics is part of our workflow. That's the most important thing.”

Working through moral and ethical issues in the workplace is hardly a new concept. Many large companies, especially in the pharmaceutical and finance industries, have had institutional compliance boards for decades that monitor the legal, moral, and ethical ramifications of their companies’ actions. But for computer programmers and software developers, ethics in AI may not come as naturally. That is why, Griffin stresses, an ethics officer should be involved from the very beginning of developing an AI product.

“Right now, almost every major company has a compliance department,” Griffin says. “I can assure you that the compliance department does not drive what happens in R&D, or design and development. Because it's almost an ex post facto tax. And it's done for laws that are already written. Once laws are already written, I can assure you that innovation in that industry or that product area has long passed. Innovation is happening, pre-law, pre-policy, pre-rules. And so that's the reason why you have the ethical reasoning in the process.”

But Griffin acknowledges that all of this would be for naught if his CEO and Board of Directors weren’t committed to making an ethical framework an integral part of the company.

“They have to be on board and say, ‘We want to be an ethical company, we want to have all of our use cases using an ethical framework that aligns with our values and that aligns with accepted social values,’ ” Griffin says. “You can have a team of 100 people, and if you don't have the buy-in from the CEO, it’s worthless.”

Consider the case at Google, regarded as one of the pioneering workplaces focused on AI ethics. In early April, Samy Bengio, a research manager on the Google AI team, resigned after the firings of two of his colleagues earlier in the year. The dismissals of the two AI ethicists at Google drew industry-wide attention and led thousands of Google employees and peers to sign a petition demanding answers about the terminations, which were widely seen as retaliation for speaking up about the transparency of Google’s own ethics policy.

Cases such as these are a stark reminder that adding “AI Ethics” to a title or an org chart isn’t always enough. But Griffin argues there is a competitive advantage to building responsible ethical reasoning into a company’s workflow, not least because it reduces the risk of a recklessly deployed AI product blowing up in the company’s face and causing a societal maelstrom.

“We don't know what the consequences of some of these deployments are going to be until they get implemented because they don't go through the ethical reasoning process similar to the one I described,” Griffin says. “If every city in the country, every country in the world, is deploying an AI product, what would the world look like? And what constituencies could be negatively impacted by that? You have to use your creativity and your imagination, the same that you use to design the technology, to think about its impact on constituencies around the world.”

