AI is changing the face of many industries, affecting how people act and their quality of life. ChatGPT and the tools built on its underlying models have shown AI’s great potential, but they have also sparked discussions about possible side effects. A lively debate is under way about AI ethics, and a key issue is AI’s ability to make ethical decisions. 

Although AI tools have been around for a long time, they have now crossed a critical threshold, and we can finally talk about a revolution. 

The ChatGPT revolution and AI ethics 

ChatGPT became an overnight sensation all over the Internet, gaining over a million users in just a few days. The trend set in motion by its launch is gaining momentum. 

“Ethical AI” means creating AI systems that are transparent, accountable and aligned with human values and rights 

The rivalry between ChatGPT’s creator, OpenAI, and its competitors is intensifying, and experts warn that this “arms race” can bring both benefits and threats. Even Google CEO Sundar Pichai has publicly admitted that the technology can be harmful if deployed incorrectly, and he has called for a global regulatory framework for AI similar to nuclear arms treaties.

There is no doubt that legislation is needed to ensure that AI is developed and deployed ethically, especially since Forrester estimates that by 2025 almost 100% of organisations will be using AI and the market for artificial intelligence software will reach $37 billion. 

OpenAI’s ChatGPT is designed to conduct coherent, logical conversations on various topics based on the questions put to it, much as if you were talking to a person. Language models of this type are the result of statistical analysis of previously written texts: on that basis, they select the most likely continuation for a given topic and context.
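
The mechanism described above can be illustrated with a toy example. The sketch below is a minimal bigram counter in Python, an invented and drastically simplified stand-in for the statistical machinery of a real language model (not OpenAI’s actual implementation), that picks the most likely next word given the previous one:

```python
from collections import Counter, defaultdict

# Tiny invented corpus standing in for the "previously written texts"
# a real model learns from.
corpus = "the model is fair and the data is fair and the code is open".split()

# Count how often each word follows each preceding word (a bigram model);
# real LLMs do this in spirit, but over far longer contexts with neural networks.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often in the training text."""
    return following[word].most_common(1)[0][0]

print(most_likely_next("is"))  # -> 'fair' (seen twice, vs 'open' once)
```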

The intersection of AI and ethics in IT 

AI ethics

AI ethics is a branch of technology ethics specific to artificially intelligent systems. AI ethics includes:

  • concern for the moral behaviour of the people who design, manufacture, use and treat artificially intelligent systems; 
  • the issue of a possible technological singularity brought about by superintelligent AI. 

Robot ethics

AI ethics intersects with so-called robot ethics. Robots are physical machines, whereas AI can exist as software alone; not all robots operate through AI systems, and not all AI systems are robots. Robot ethics includes:

  • the morality of how humans design, use, and treat robots; 
  • how robots are used to harm or benefit people; 
  • their impact on individual autonomy and social justice.

Ethical challenges related to AI

There are several key issues to consider in AI development and deployment. These include legal liability, threats to privacy, the danger of creating disinformation, and business models with negative social consequences.

Key questions regarding AI ethics:

  • Lack of transparency in AI tools: humans do not always understand how AI reaches its decisions. 
  • AI is not neutral: AI-based decisions are prone to inaccuracies, discriminatory outcomes, and embedded or introduced bias. 
  • Data security: oversight of data collection and protection of user privacy.
  • Justice: risks to human rights and civic values. 

Actions taken

Scientific and social organisations worldwide are taking initiatives to promote socially beneficial AI and establish an ethical framework. The recurring values are transparency, honesty, non-maleficence, responsibility, privacy, beneficence, freedom, autonomy, trust, dignity, solidarity, and sustainable development. 

To bring order to this landscape, Oxford University researchers Luciano Floridi and Josh Cowls created an ethical framework for AI based on four established principles of bioethics: beneficence, non-maleficence, autonomy and justice. To this set they added the principle of explainability.

Explainable AI also encompasses interpretability: explainability refers to summarising a neural network’s behaviour and building user trust, while interpretability means understanding what the model did or could do.
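
To make the distinction more concrete, here is a minimal, self-contained sketch of one widely used interpretability technique, permutation importance. The dataset and model are synthetic and invented for illustration; the idea is simply to measure how much accuracy drops when one input feature is scrambled:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Synthetic data and a simple model, purely for illustration.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X, y)
baseline = model.score(X, y)

rng = np.random.default_rng(0)
for feature in range(X.shape[1]):
    X_shuffled = X.copy()
    rng.shuffle(X_shuffled[:, feature])  # break this feature's link to the labels
    drop = baseline - model.score(X_shuffled, y)
    # The bigger the accuracy drop, the more the model relied on this feature.
    print(f"feature {feature}: accuracy drop {drop:.3f}")
```

A per-feature report like this is one way to answer “what did the model do?” without opening the black box itself.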

The researchers recognised that the full achievement of ethical AI requires combining the above principles with algorithmic ethics.

What is algorithmic ethics? 

It refers to the moral guidelines and ideals embedded in the creation of AI systems. Its purpose is to guide the development of these systems so that they meet standards of fairness, privacy and accountability.

Algorithmic ethics includes, among other things: 

  • eliminating errors in data used to train algorithms through careful selection of data sources;
  • augmenting data by adding or altering it to produce a more diverse data set (see the sketch after this list); 
  • engaging a diverse group of stakeholders at the system design and development stage: ethicists, scientists, community representatives, and, above all, AI developers. 
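
As a loose illustration of the augmentation point above, the sketch below oversamples an under-represented group so the training set becomes more balanced; the records and group labels are invented for the example:

```python
import random

random.seed(0)

# Invented training records: group "B" is under-represented.
training_data = (
    [{"group": "A", "label": 1}] * 90
    + [{"group": "B", "label": 1}] * 10
)

minority = [r for r in training_data if r["group"] == "B"]
deficit = 90 - len(minority)

# Duplicate minority records; in practice one would perturb or
# synthesise new examples rather than copy them verbatim.
augmented = training_data + random.choices(minority, k=deficit)

counts = {g: sum(r["group"] == g for r in augmented) for g in ("A", "B")}
print(counts)  # {'A': 90, 'B': 90}
```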

Researchers, developers and AI engineers must constantly review and analyse algorithms to identify and correct errors that may occur over time.

Potential ethical risks related to AI

1. Bias and prejudice 

One of the main risks of AI is bias, which can lead to unfair and discriminatory results. Biased AI algorithms can make decisions that are unjust to specific groups of people, reinforce social inequalities, and perpetuate prejudice. 

Prejudices originate from stereotypical representations deeply rooted in societies. They take many forms – racial, socio-economic, and gender-based – resulting in unfair outcomes for particular social groups. Search engine technology, for instance, is not neutral: it prioritises the results with the most clicks, shaped by users’ preferences and location.

The reliability of systems depends on the data they are trained on – biased data can lead to biased algorithms 

At the same time, it is difficult for Internet users to determine whether the data, and consequently the choices based on it, are fair and trustworthy, which can lead to undetected bias in these systems.

In this way, real-world prejudices and stereotypes are perpetuated on the Internet. Without extensive testing and diverse teams, unconscious biases can enter machine learning models and become entrenched in AI systems.

Examples of detected bias in AI systems:

  • Errors in gender detection: facial recognition algorithms developed by Microsoft, IBM, and Face++ made errors in gender classification. These systems were shown to identify the gender of lighter-skinned men more accurately than that of darker-skinned women. 
  • Errors in voice recognition: a 2020 study reviewing voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they had a higher error rate when transcribing Black speakers’ voices than white speakers’. 
  • Favouring one gender: Amazon’s AI hiring and recruiting algorithm favoured male applicants over female applicants, because the system was trained on data collected over ten years, mostly from male applicants. Amazon has stopped using it.

Experts warn that algorithmic bias is already present in many industries. Bias in machine learning will only become more significant as the technology spreads into critical areas such as medicine and law.

2. Data privacy and security

Undoubtedly, AI has the potential to advance societies, but it also threatens their rights to privacy and security. To make AI trustworthy, its creators must prioritise human rights and guarantee that the design and implementation process is fair and accountable.

To protect privacy, setting clear standards should become part of AI governance

The responsible development of AI rests on the ethics of data, the fuel that drives artificial intelligence.

Data collection and use must be lawful, individuals must be aware of and in control of their data, and privacy must be respected throughout the development and deployment of AI.

Protests against the use of data

The type and origin of the data used to train individual AI models is essential here. These models are known to have been trained on vast amounts of text, books, and artwork, and creative industries are outraged by the practice of training AI models on copyrighted material without artists’ consent.

Using GenAI tools, you can modify images, select elements, or combine two or more images into one, creating something seemingly new. Many artists protest against the use of their art to train AI models such as Stable Diffusion. Current lawsuits concern images produced by AI generators, and the cases may soon extend to musicians and the entire creative industry.

People are also concerned about AI being used for surveillance and for spreading fake content online through tools that operate without human oversight.

AI is becoming an increasingly integral part of facial and voice recognition systems. Some of these systems have real business applications and directly impact people, yet they are susceptible to biases and errors introduced by their human creators [see: Bias and prejudice].

3. Transparency and understandability 

The creators of AI are, in a way, representatives of humanity’s future, and they have an ethical obligation to be transparent in their work. Building trust and promoting AI transparency can be achieved by creating detailed documentation of how systems work.

Setting clear rules is critical to ensure that AI is used ethically and responsibly 

OpenAI, according to its declarations, is a non-profit organisation dedicated to the research and development of open-source AI beneficial to humanity. Unfortunately, making code open does not make it understandable, which means that the AI code itself is not transparent. At the same time, there is a concern that making the full capabilities of AI systems available to certain organisations may do more harm than good.

Microsoft, in turn, expressed concern about allowing universal access to its facial recognition software, for example by the police. At the same time, the company, reportedly for cost-cutting reasons, laid off its entire AI ethics team – and it is not the only company that has decided to economise in this area. 

Many global organisations and advocates recommend government regulation as a means of ensuring transparency 

  • UNESCO, in 2021, issued the Recommendation on the Ethics of Artificial Intelligence, the first global standard-setting instrument in this field. It addressed, among other issues, gender bias and stereotypical representations of women in the digital world. 
  • The Algorithmic Accountability Act of 2022, proposed in the US, would require companies to assess the impact of their AI systems and act to reduce their negative effects. It highlights factors such as bias, discrimination and privacy. 
  • The IEEE, the world’s largest technical professional organisation developing technology for humanity’s benefit, strives to define transparency scales for different types of users. 

4. Replacing human labour 

AI applications can potentially affect many areas of life and the labour market. Presumably, work will be divided into tasks that:

  • can be automated; 
  • cannot be automated; 
  • should not be automated. 

The potential replacement of human jobs by AI raises ethical questions and will lead to significant changes in the labour market 

As a result of this division, certain professions and specialised activities may disappear. What about creative work? Various studies show that AI text generators can produce grammatically correct content, but human creators have a deeper understanding of real-world context and emotion. For now, humans still create better content than AI, in part because AI lacks logical reasoning. 

As early as 1973, Joseph Weizenbaum argued that AI technology should not replace humans in jobs requiring respect and care, because people in certain positions are expected to show genuine feeling and empathy. The argument is still valid. 

Laws and standards in AI ethics 

As far back as the 1940s and 1950s, Isaac Asimov pondered the ethics of machines. He tested the limits of his Three Laws of Robotics to see where they would cause paradoxical or unexpected behaviour. His work suggests that no set of fixed rules can adequately anticipate all possible circumstances.

Currently, the UN, OECD, EU and many countries are working on strategies to regulate AI and find the appropriate legal framework. 

European Commission and AI 

In June 2019, the EC High-Level Expert Group on AI (AI HLEG) published its “Policy and Investment Recommendations for Trustworthy Artificial Intelligence”. This was the second stage of its work, following the publication of the “Ethics Guidelines for Trustworthy AI” in April 2019. 

The June AI HLEG recommendations cover four main areas: people and society at large, research and academia, the private sector, and the public sector.

There should be a set moral framework that AI cannot change 

The European Commission says that the HLEG recommendations “reflect both the opportunities for AI technologies to drive economic growth, prosperity and innovation, and the potential risks involved”. It states that the EU intends to lead the way in shaping AI policy in the international arena. 

GDPR and AI 

The EU General Data Protection Regulation requires companies to implement safeguards that ensure the transparency and fairness of AI algorithms. The GDPR applies to all organisations that process the personal data of people in the EU. It requires companies to secure the processing of personal data and to enable individuals to access and control their data.

The role of IT companies in ensuring ethical AI 

Artificial intelligence itself cannot be held responsible – but the people who create it can 

IT companies and developers play a crucial role in creating AI systems, so they should ensure that ethical principles are embedded in them.

That includes ensuring the security and integrity of these systems by creating algorithms that take many perspectives into account, which reduces the risk of perpetuating prejudices held by their creators.

Actions to reduce the risk of bias:

  • engaging teams that are diverse in ethnicity, gender, socio-economic status, education, knowledge, values, and beliefs in the development and testing of AI algorithms; 
  • regular audits and reviews of the AI system to ensure the diversity and representativeness of data sets; 
  • the use of reliability measures, which help determine how an algorithm performs for different ethnic or gender groups and highlight discrepancies in the results (a minimal sketch follows this list); 
  • integrating ethical principles and codes of conduct with AI systems. 
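
As a minimal sketch of such a reliability measure (all predictions, labels, and group tags below are invented for illustration), one can compare a model’s accuracy across demographic groups and flag the gap:

```python
# Toy audit: per-group accuracy and the gap between groups.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
labels      = [1, 0, 0, 1, 1, 1, 0, 1]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

def group_accuracy(group: str) -> float:
    """Accuracy of the model on the records belonging to one group."""
    pairs = [(p, t) for p, t, g in zip(predictions, labels, groups) if g == group]
    return sum(p == t for p, t in pairs) / len(pairs)

acc = {g: group_accuracy(g) for g in ("A", "B")}
print(acc, "gap:", abs(acc["A"] - acc["B"]))  # a large gap flags potential bias
```

Real audits would use held-out test data and richer fairness metrics, but the principle – measure per group, compare, investigate discrepancies – is the same.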

Summary 

Unethical AI has the potential to harm individuals and society. Preventing this requires the involvement of people, including the IT industry, throughout the development and implementation of AI.

Appropriate regulations can support the development of ethical AI by setting standards for transparency, limiting discrimination and protecting data. Introducing guidelines and integrity metrics and ensuring team diversity can help AI developers build safe and accountable systems.