What is Ethical AI development?

With the growing popularity of Artificial Intelligence tools, such as the hyper-popular ChatGPT (or, as Adam Kaczmarek summed it up - The Revolutionary Bullshit Parrot), Midjourney, Bard, and others, AI has basically become… mainstream. And although it is a field scientists have known and studied for years, making it available to consumers for general use has spurred its development even more. It has accelerated the demand for all kinds of AI solutions, and now most large companies have already boasted of introducing AI into their products and services. From powering voice assistants to enabling advanced medical diagnostics, AI has become an integral part of our increasingly digitized world. However, as AI systems become so ubiquitous, concerns about their ethical implications have also emerged. I bet you've already heard voices claiming that the development of AI leads to nothing other than the destruction of humanity. What a Black Mirror-esque scenario, right?

I will not discuss today where the development of AI will take us. Instead, let us focus on a slightly less catastrophic vision - a subject that is nevertheless critical and potentially problematic.

What is Ethical AI?

Although the breakthrough and value of Artificial Intelligence are visible, many systems based on it raise concerns about their design and use. Well, it was quite obvious that AI would also become a tool in the hands of those who do not have good intentions: from propagating stereotypes and disclosing sensitive data, through sowing disinformation and using it for propaganda and manipulation, to deepening socio-economic inequalities and even creating powerful autonomous weapons. Not to mention the hard-to-predict consequences of AI's rapid development, including the potential emergence of an Artificial Intelligence capable of taking over the world and exterminating humanity.

[Image: a Dalek shouting "Exterminate!"]

Yeeeees, I know that the Daleks were not a form of Artificial Intelligence, but when anybody mentions extermination, it always reminds me of them ;)

These and similar issues lead to pretty heated debates about preventing such situations. Eventually, the discussion very often comes down not only to the construction of the AI tools themselves but also to the questionable quality of the data on which they learn. As the need for automation and data-driven decision-making grows, there is more and more data from which companies and various organizations draw valuable insights. And this, of course, is just the beginning of AI use cases. But with the increasing usage of AI solutions, the unforeseen consequences of wrong assumptions, whether rooted in the data itself or in the tool's configuration, have led to the creation of ethical AI guidelines.

Ethical AI concerns the development, deployment, and usage of Artificial Intelligence systems that align with ethical principles and values and promote the well-being of individuals and society as a whole. Ethical AI systems are designed to be transparent, accountable, and equitable, prioritizing the protection of human rights and dignity. It is guided by a set of principles and rules, which include (but are not limited to):

  • Fairness: ensuring that AI systems are designed and deployed to treat all individuals and groups reasonably and not discriminate against anyone. However, fairness in AI is a complex and contextual concept, and there is no single universal definition that fits all AI applications. Achieving complete fairness is challenging, but the aim is to detect and mitigate fairness-related harms as much as possible (one such check is sketched right after this list).
  • Transparency and explainability: developing AI systems transparently and in an explainable way, so users can understand what data is used, how decisions are made, and how those decisions affect them.
  • Accountability: ensuring that there is clear accountability for the decisions made by AI systems, which means that creators and any other people responsible are easy to identify and that there are solutions that prevent and address any damage or negative impacts that may arise.
  • Privacy: AI systems should be designed to protect individuals' privacy and safeguard personal data and information throughout the system's life cycle.
  • Safety and reliability: ensuring that the AI system performs reliably and as intended, and that it does not pose any risks to individuals or even society as a whole.
  • Contestability: due to their potentially significant impact on the user, a selected group, or even the whole society, humans should be able to challenge questionable results.
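
To make the fairness point above a bit more concrete, here is a minimal sketch of one possible check - the demographic parity difference, i.e., the gap in positive-decision rates between groups. This is just one of many fairness metrics, and the group names and decisions below are purely hypothetical:

```python
from collections import defaultdict

def demographic_parity_difference(decisions):
    """decisions: iterable of (group, outcome) pairs, with outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical decisions; a large gap would warrant investigation.
decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
             ("group_b", 1), ("group_b", 0), ("group_b", 0)]
gap, rates = demographic_parity_difference(decisions)
print(rates, gap)  # group_a ~0.67 vs. group_b ~0.33, a gap of about 0.33
```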

Promoting ethical AI thus contributes to using this technology in a manner consistent with generally accepted values and serving the common good of society. After all, AI is supposed to serve us, help us, facilitate tasks, perform those that would be too time-consuming for us, and so on.

One of the key aspects of ethical AI is inclusiveness in AI datasets: making sure that the data used to train and develop AI systems is diverse and representative of different populations. This is particularly important for ensuring that AI systems do not perpetuate or exacerbate existing social inequalities or biases.
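
As a rough illustration (not a prescribed method), one simple representativeness check is to compare group shares in a dataset against reference population shares. All names and numbers below are hypothetical:

```python
# Hypothetical group counts in a training set vs. reference population shares.
dataset_counts = {"group_a": 700, "group_b": 250, "group_c": 50}
population_share = {"group_a": 0.50, "group_b": 0.30, "group_c": 0.20}

total = sum(dataset_counts.values())
for group, count in dataset_counts.items():
    share = count / total
    gap = share - population_share[group]
    print(f"{group}: dataset {share:.0%} vs. population "
          f"{population_share[group]:.0%} ({gap:+.0%})")
```

Here, group_c makes up 5% of the dataset but 20% of the reference population - exactly the kind of gap that can lead a model to underperform for that group.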

Responsible AI

These two concepts are very close. As a rule, however, ethics mainly refers to the good of individuals, groups, and society. Responsible AI extends Ethical AI by putting those ethical principles into practice: it aims to ensure that AI is developed and used in a way that considers the potential impacts on various stakeholders, such as users, employees, and the broader community.

Hence, Ethical AI is an essential component of responsible AI. Still, responsible AI goes beyond ethical considerations to include a more comprehensive set of principles and practices to ensure that AI is developed and used responsibly and sustainably.

[Image: "dog or cat?" meme - source: Reddit]

Best Practices for Developing Ethical AI

In terms of law, the fate of AI development has yet to be clearly decided; it is a relatively new technological trend. However, steps are already being taken in this direction, for example, by the European Parliament, which is working on the AI Act - a regulation defining, among other things, the obligations of providers and users depending on the level of risk that artificial intelligence may generate, as well as prohibited practices. Moreover, UNESCO published its Recommendation on the Ethics of Artificial Intelligence in November 2021, which (in theory) takes into account the protection of human rights, the environment, minorities, health, and many other spheres. It is a set of recommendations, mainly in the policy field, intended to serve as a guide for the conduct and development of AI. It focuses on promoting core values to ensure that systems are built for the benefit of society, the environment, and humanity overall.

Regarding more technical practices for engineers, we can find specific recommendations on creating ethical AI systems on the websites of companies like Microsoft, IBM, or Google. They give valuable tips, and here are some of them:

Human-centered design approach

It means that user experience is crucial for assessing the impact of AI systems. Hence, developers should design features with clear disclosures and user control. It is also vital to offer single or multiple options depending on the context, anticipate adverse feedback, and iterate during testing. To ensure variety and unbiased results, system creators should engage diverse users and incorporate their feedback.

Multiple metrics to assess training and monitoring

For a comprehensive understanding of tradeoffs and experiences, it is recommended to use multiple metrics rather than relying on a single one. Various metrics, such as user surveys, overall system performance metrics, etc., can help. It is crucial to analyze the false positive and false negative rates in the different subgroups and ensure that the metrics chosen are consistent with the context and goals of the system.
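
As a hedged sketch of such a subgroup analysis (the group labels and records below are illustrative), false positive and false negative rates can be computed per group like this:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
    for group, y_true, y_pred in records:
        c = counts[group]
        if y_true == 1:
            c["pos"] += 1
            c["fn"] += int(y_pred == 0)   # missed positive
        else:
            c["neg"] += 1
            c["fp"] += int(y_pred == 1)   # false alarm
    return {g: {"fpr": c["fp"] / max(c["neg"], 1),
                "fnr": c["fn"] / max(c["pos"], 1)} for g, c in counts.items()}

records = [("group_a", 1, 1), ("group_a", 0, 1),
           ("group_b", 1, 0), ("group_b", 0, 0)]
print(error_rates_by_group(records))  # very different error profiles per group
```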

Examining raw data

Analyzing raw data, where possible, helps to understand it better and to ensure accuracy and fairness. The data fed into the system has a direct impact on the results. It is, therefore, crucial to check for data errors, ensure representativeness, and eliminate bias and unnecessary features from the training data.
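
A minimal sketch of what such raw-data checks might look like, assuming tabular data in pandas; the column names and values are invented for illustration:

```python
import pandas as pd

# Stand-in for raw data that would normally be loaded from your source.
df = pd.DataFrame({
    "age":   [34, 29, None, 151, 42, 38],        # one missing, one implausible
    "group": ["a", "a", "a", "a", "a", "b"],     # heavily skewed representation
    "label": [1, 1, 0, 1, 1, 0],
})

print(df.isna().mean())                          # share of missing values per column
print(df[(df["age"] < 0) | (df["age"] > 120)])   # implausible ages
print(df["group"].value_counts(normalize=True))  # group representation
print(df.groupby("group")["label"].mean())       # label balance per group
```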

Testing

Following software engineering best practices for testing and quality assurance applies to AI systems as well. It includes conducting thorough unit tests, proactively monitoring data and concept drifts, using a reliable test dataset, incorporating iterative user testing, implementing quality checks within the system, and performing integration tests to understand system interactions and potential feedback loops.
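
To illustrate one item from this list - proactive drift monitoring - here is a sketch that flags data drift on a single numeric feature with a two-sample Kolmogorov-Smirnov test. The threshold and the toy values are assumptions to be tuned per system:

```python
from scipy.stats import ks_2samp

def check_feature_drift(reference, live, alpha=0.01):
    """Flag drift when the live distribution differs from the reference one."""
    stat, p_value = ks_2samp(reference, live)
    return p_value < alpha, stat

reference = [0.1, 0.2, 0.2, 0.3, 0.4, 0.5]   # feature values at training time
live = [0.8, 0.9, 1.1, 1.2, 1.3, 1.4]        # values observed in production
drifted, stat = check_feature_drift(reference, live)
print(drifted, stat)  # a drifted feature should trigger investigation
```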

Collecting and handling data responsibly

First, AI engineers have to determine whether an ML model can be trained without sensitive data at all, for example, by accessing publicly available datasets. If using sensitive training data is necessary, its usage should be minimized and handled with the utmost care, adhering to legal requirements and standards. It is highly advisable to follow best practices such as encrypting data in transit and at rest and aligning with privacy principles (of course, Google recommends its own).
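
As one deliberately simplified illustration of encrypting sensitive data at rest, here is a sketch using the Fernet recipe from the cryptography package; the record content is a placeholder, and real key management belongs in a secrets manager, not in code:

```python
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice: fetch from a secrets manager
fernet = Fernet(key)

record = b"name=Jane Doe;diagnosis=..."   # stand-in for a sensitive record
ciphertext = fernet.encrypt(record)       # this is what gets persisted

# Decrypt only inside the trusted training environment.
assert fernet.decrypt(ciphertext) == record
```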

Safeguarding the privacy of ML models

ML models have the potential to reveal training data, whether through their internal parameters or through observable model behavior. To assess whether unintentional memorization or exposure of sensitive data occurs, experts use dedicated tests and quantitative research that measure how much a machine learning model reveals about the individual data records on which it was trained. Other good practices include optimizing models by exploring data minimization approaches, techniques with mathematical guarantees, and procedures borrowed from cryptographic software development.
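
A hedged sketch of one such quantitative check: comparing the model's loss on training records vs. held-out records. A large gap hints at memorization that a membership-inference attacker could exploit. The data and model below are synthetic stand-ins, not a prescribed setup:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

# Synthetic binary-classification data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=200) > 0).astype(int)
X_train, y_train, X_hold, y_hold = X[:100], y[:100], X[100:], y[100:]

model = LogisticRegression().fit(X_train, y_train)
gap = (log_loss(y_hold, model.predict_proba(X_hold))
       - log_loss(y_train, model.predict_proba(X_train)))
print(gap)  # near zero is reassuring; a large gap hints at memorization
```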

Identifying potential threats to the system

As you've already noticed, adequate planning of an AI system's architecture includes security. To get it right, you also need to think about potential misuse of the system, the exploitation of vulnerabilities (and, above all, how to prevent it), and the consequences of such behavior. In assessing potential threats, their probability, and the scale of possible damage, it may be helpful to build a risk model that inventories assets and analyzes all exposures and hazards.
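
To sketch what such a risk model might look like in its simplest form, here is a likelihood × impact register; the scoring scheme is a common convention, and the assets and threat scenarios below are purely illustrative:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    asset: str
    scenario: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def risk(self) -> int:
        return self.likelihood * self.impact

threats = [
    Threat("training data", "poisoning via user-submitted samples", 3, 4),
    Threat("model API", "model extraction through bulk querying", 2, 3),
    Threat("predictions", "adversarial inputs causing misclassification", 4, 4),
]

# Review the highest-scoring exposures first.
for t in sorted(threats, key=lambda t: t.risk, reverse=True):
    print(f"{t.risk:>2}  {t.asset}: {t.scenario}")
```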

Final thoughts

The benefits of Artificial Intelligence are huge, but the risk of using it for threatening purposes is almost as high. Hence the growing interest in, and ongoing work on, specific frameworks to be used by those who develop and apply such tools.
