Introduction to AI Ethics

As software teams, we are starting to implement and integrate AI into more of the projects we complete. From chatbots to virtual assistants, AI is becoming an integral part of the web and mobile user experience. As a result, it’s important that we understand not only the capabilities of AI, but also the legal, philosophical, and political ramifications of its use. 

In this blog post, we will explore the philosophy, policy, and governance defining current thinking about ethics in AI.  

AI Ethics in Practice

Though ethical questions surrounding AI are complex and continually evolving, the concepts and practices can generally be placed into six categories:

  1. Fairness
  2. Explainability
  3. Security
  4. Transparency
  5. Privacy
  6. Responsibility

There are different interpretations of what each of these means in practice, and the concepts overlap considerably. They do not, however, imply one another: an app that uses AI can be fair without being transparent, for example.

Each of these concepts is included in the real-life examples below. 

Fairness: Cross-Validation of Machine-Learning Models

We’ve all heard the phrase “garbage in, garbage out.” This also applies to AI. If AI is being trained on inaccurate, unfair, or biased information, it will produce similar outputs. One way that bias can be mitigated is with cross-validation. 

Cross-validation is one way to measure the accuracy and performance of a machine-learning model. The data set is split into several subsets, or folds; in each round, one fold is held out for testing while the remaining folds are used for training, and the results are averaged across rounds. Because every record is used for both training and evaluation, the resulting performance estimate is more reliable than one based on a single train/test split, and skewed results caused by an unlucky split are easier to spot.
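
To make the idea concrete, here is a minimal sketch of k-fold cross-validation using scikit-learn. The dataset and model are generic placeholders rather than the setup of any particular project:

```python
# Minimal k-fold cross-validation sketch. The dataset and classifier
# are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# cv=5 splits the data into five folds; each fold is held out once for
# testing while the remaining folds are used for training.
scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")

print("Per-fold accuracy:", [f"{s:.3f}" for s in scores])
print(f"Mean accuracy: {scores.mean():.3f}")
# A large spread between folds suggests the model's performance depends
# heavily on which slice of data it happens to see.
```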

Explainability: Defending the Output and Limitations of AI

Despite the documented limitations of AI’s datasets and algorithms, many people take the information AI provides as unequivocal truth. When users are unable to explain where an AI’s response comes from or how an output was decided, they cannot guarantee that the information is free of bias or inaccuracies.

For example, a study completed by Tulane University in January 2024 found that while AI-assisted sentencing within the judicial system helped to decrease gender bias and overall jail time, racial bias in sentencing persisted. Because the system’s reasoning was not explainable, the bias it inherited from its data was able to persist.

Because of these potential problems, many companies are using the “Human in the Loop” approach. With this approach, an AI model’s output must be checked by a human before it is used. This approach further trains the AI while also keeping a human involved to minimize errors.
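
As a rough illustration, and not a description of any particular vendor’s workflow, a Human in the Loop gate can be as simple as routing low-confidence outputs to a review queue. The confidence threshold and data shapes below are assumptions made for the sake of the example:

```python
# Human-in-the-loop sketch: auto-approve confident predictions and hold
# everything else for a human reviewer. The threshold, queue, and
# prediction format are assumptions for illustration only.
from dataclasses import dataclass, field
from typing import List

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per use case


@dataclass
class Prediction:
    item_id: str
    label: str
    confidence: float


@dataclass
class ReviewQueue:
    pending: List[Prediction] = field(default_factory=list)

    def submit(self, prediction: Prediction) -> None:
        self.pending.append(prediction)


def handle_prediction(prediction: Prediction, queue: ReviewQueue) -> str:
    """Auto-approve confident predictions; hold the rest for a human."""
    if prediction.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-approved"
    queue.submit(prediction)
    return "sent to human review"


queue = ReviewQueue()
print(handle_prediction(Prediction("doc-1", "approve", 0.97), queue))
print(handle_prediction(Prediction("doc-2", "deny", 0.62), queue))
print("Items awaiting review:", len(queue.pending))
```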

The need for explainable outputs has prompted the coining of the term “Explainable Artificial Intelligence,” or XAI. XAI prioritizes a clear and accurate understanding of an output’s source data and of the process by which a model arrives at its results, so that users can understand and trust the results provided.
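
One common XAI technique is permutation importance, which measures how much a model’s accuracy drops when each input feature is scrambled. The sketch below uses scikit-learn with a placeholder dataset and model:

```python
# Permutation-importance sketch: shuffle each feature and measure how
# much accuracy drops. The dataset and model are placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# A large accuracy drop when a feature is shuffled means the model
# leans heavily on that feature for its predictions.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

ranked = sorted(
    zip(data.feature_names, result.importances_mean),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")
```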

Transparency: Defending the Data Collection and Usage of AI

For AI to be transparent, it’s important that the data is explainable and that users have a clear understanding of how their data is being collected and used. 

For example, Grio recently released the GRŌ app on iOS, which helps you create a customized gardening plan based on your location and gardening preferences. The app is powered by ChatGPT, and as a result, we provided information regarding how the AI works, which data points it is collecting, how that data is being handled, and the limitations of the recommendations it provides. 

However, when companies are not transparent with their information, it can lead to distrust between them and their users. For example, a Grio employee recently discovered that although an app he was using stated that it collected his driving speed, acceleration, and location, the fine print stipulated numerous additional data points that were being collected without his knowledge. He was angered by the dishonesty and no longer felt that he could trust the company behind the app. If companies are to maintain the trust and respect of their clients, it’s important for them to be transparent about how they collect, use, and store their clients’ data.

Security and Privacy: The Trust Layer

Security and privacy are at the forefront of the AI discussion. Machine learning models have the potential to learn from every prompt they receive. This means that if personally identifiable information (PII) is used in a prompt, it could potentially be re-used in an AI’s future outputs. Many companies are, therefore, searching for ways to allow their staff to use AI to improve efficiency without compromising the privacy and security of their staff, clients, and stakeholders. 

One such example is Salesforce’s AI “Trust Layer.”

Salesforce gives the following example – imagine you’re a business owner using Salesforce for your CRM and customer management. A member of your sales team logs in and writes a prompt using the Salesforce AI integration to figure out how to sell a client a new product. In order to target the specific client, they include the client’s PII in the prompt. 

The Salesforce Trust Layer is designed to proactively identify potential PII and strip it out before the prompt is fed into the model. If the Trust Layer is successful, it keeps PII and sensitive data from being learned by the model, and therefore out of future model outputs. These types of solutions will be essential as AI is used in more and more security-sensitive situations.
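
Salesforce has not published the Trust Layer’s internals, but the general idea of masking PII before a prompt reaches a model can be sketched with simple pattern matching. The patterns and placeholder tokens below are illustrative assumptions, not Salesforce’s implementation:

```python
# Illustrative only: Salesforce has not published the Trust Layer's
# implementation. This shows the general idea of masking obvious PII
# (emails, phone numbers) in a prompt before it is sent to a model.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}


def mask_pii(prompt: str) -> str:
    """Replace recognizable PII with placeholder tokens."""
    masked = prompt
    for label, pattern in PII_PATTERNS.items():
        masked = pattern.sub(f"[{label}]", masked)
    return masked


prompt = (
    "Draft an upsell email to Jane Doe (jane.doe@example.com, 415-555-0134) "
    "about our premium support plan."
)
print(mask_pii(prompt))
# Draft an upsell email to Jane Doe ([EMAIL], [PHONE]) about our premium support plan.
```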

Responsibility: Continuous Testing and Improvement

The individuals creating these models and datasets have a responsibility to keep improving the model and the data used to train it. However, this can be difficult to enforce when there is no broader governing body or policy setting a universal standard. For example, when ImageNet was created, there was no policy stipulating who would catalog the images or how the cataloguers would be chosen.

In 2019, for example, ImageNet, one of the largest image datasets, discovered that approximately 60,000 of its images had been labeled with racist categories. Those images had been used to train AI systems for over ten years, potentially embedding that bias in many of the models trained on them.

Therefore, it is the responsibility of not only the creators of these models but also governing bodies to find ways to reduce bias and discrimination within them. 

Retraining with AI

To ensure self-learning models stay up to date, they must be constantly retrained with new and relevant data. 
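
As a simple illustration, here is a sketch of folding new batches of data into an existing model using scikit-learn’s partial_fit. The simulated data stream is a stand-in for real, newly collected examples:

```python
# Sketch of periodic retraining with scikit-learn's partial_fit. The
# simulated batches stand in for newly collected, labeled data.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(random_state=0)
classes = np.array([0, 1])


def simulated_batch(n=200):
    """Stand-in for a batch of newly collected, labeled data."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    return X, y


# Initial training pass; `classes` must be supplied on the first call.
X, y = simulated_batch()
model.partial_fit(X, y, classes=classes)

# As new data arrives, fold it into the existing model rather than
# leaving the model frozen on stale data.
for _ in range(5):
    X_new, y_new = simulated_batch()
    model.partial_fit(X_new, y_new)

X_eval, y_eval = simulated_batch()
print(f"Accuracy on fresh data: {model.score(X_eval, y_eval):.3f}")
```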

The need for standardized evaluation has led many researchers and academics to search for a viable solution. One of the most famous results is the GLUE benchmark. In 2018, a group of academics and researchers who believed that the existing ways of testing AI simply weren’t good enough created the General Language Understanding Evaluation (GLUE): a series of public tests and datasets that provide a standardized framework for evaluating and comparing language models, allowing researchers and developers to assess progress in language-understanding algorithms.

The GLUE framework was made available to the public with one caveat: If you used the framework, the results of your test would be publicly available. This encouraged accountability and transparency for the test, the models being tested, and the corporations creating the models. 
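
The GLUE tasks and their official metrics are available through the Hugging Face datasets and evaluate libraries. The sketch below scores placeholder predictions on one GLUE task (MRPC); a real evaluation would substitute actual model output:

```python
# Sketch of scoring predictions on one GLUE task (MRPC) with the
# Hugging Face `datasets` and `evaluate` libraries. The predictions
# here are dummy placeholders standing in for real model output.
from datasets import load_dataset
import evaluate

# Load the validation split of MRPC, a paraphrase-detection task.
mrpc = load_dataset("glue", "mrpc", split="validation")
references = mrpc["label"]

# Placeholder predictions; a real evaluation would run a model over
# mrpc["sentence1"] / mrpc["sentence2"] pairs instead.
predictions = [1] * len(references)

metric = evaluate.load("glue", "mrpc")
print(metric.compute(predictions=predictions, references=references))
# Reports the task's official metrics (accuracy and F1 for MRPC).
```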

Within about a year, GLUE found that the best AI models were already exceeding the capabilities of its tests, so the group created a second, more difficult series of tests called SuperGLUE.

AI: Governance & Policy

Who’s responsible and accountable for AI? The short answer is “everyone.”

Already, we have seen examples of large entities whose irresponsibility has led to issues for consumers: 

  • One of the first examples was Microsoft’s chatbot, Tay, which began posting racist language within a day of its release after users manipulated it.
  • New York City’s AI chatbot for small businesses gave business owners inaccurate guidance about what they could or could not do as part of their business, in some cases effectively advising them to break the law.

In response to these situations and others, many policy organizations have been created to help people understand best practices and how they should be applied to governance, law, auditing, and more. Today, hundreds, if not thousands, of groups are trying to figure out how AI should be governed going forward.

Because of the lack of standardization regarding governance, we are all responsible for the ethics of the AI we create and consume. Even software companies like Grio need to understand the complexity of AI ethics and be proactive about guiding our customers in the right direction when we discuss AI integration and implementation.

Current Governance

The European Union is currently spearheading the movement toward standardized governance and policy. The Artificial Intelligence Act, which was approved by the European Parliament on March 13, 2024, is considered the world’s first comprehensive horizontal legal framework for AI. Its requirements will be phased in across the EU over the next two years, and businesses operating within the EU will be required to follow its policies.

The general consensus is that governance and policy for AI in the United States is still lagging. And, like many topics in the United States, there are varied opinions about how much involvement the U.S. government should have. Many technologists fear that involving the government in AI policy will stifle advancements. Policymakers and consumers are concerned about the lack of governance and want to make sure businesses and corporate entities are following ethical best practices. 

In the United States, there are currently no comprehensive federal laws governing AI. A Blueprint for an AI Bill of Rights was released in October 2022, and an Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI was published in October 2023, but neither carries the weight of comprehensive legislation. At the state level, each state has its own rules and regulations regarding AI.

Summary

The ethical concepts and considerations surrounding AI are not new, but the contexts in which they are applied, and the practices used to apply them, are.

As a consumer, it’s important to do your research before using any AI tool or system. AI has numerous benefits for our society, but it is also important that we, as creators and consumers, remain informed and vigilant as governance and policy continue to evolve.

Schedule a free consultation with Grio to learn more about how you can safely and effectively use AI in your organization. 
