Generative AI
ChatGPT and Generative AI – why NOW is the time to understand them?
AI is changing the face of many industries, affecting how people act and their quality of life. ChatGPT and the tools built on its system have shown the great potential of AI, but they have also sparked discussions about possible side effects. There is a lively debate about AI ethics, and a key issue is AI's ability to make ethical decisions.
Although AI tools have been around for a long time, today they have reached a critical threshold, and we can finally talk about a revolution.
ChatGPT became an overnight sensation across the Internet, gaining over a million users in just a few days, and the trend started by its premiere is still gaining momentum.
“Ethical AI” means the creation of AI systems that are transparent, accountable and compliant with human values and rights
Rivalry with its creator, OpenAI, is intensifying, and experts warn that this "arms race" can bring both benefits and threats. Even Google CEO Sundar Pichai has publicly admitted that the technology can be harmful if implemented incorrectly, and he has called for a global regulatory framework for AI similar to nuclear arms treaties.
There is no doubt that legislation is needed to ensure that AI is developed and deployed ethically, especially since Forrester estimates that by 2025 almost 100% of organisations will use AI and the market for artificial intelligence software will reach $37 billion.
AI ethics is a branch of technology ethics specific to artificially intelligent systems.
The ethics of AI intersects with so-called robot ethics. Robots are physical machines, while AI can exist purely as software; not all robots operate through AI systems, and not all AI systems are robots.
There are several key issues to consider in AI development and deployment. These include legal liability, threats to privacy, the danger of generating disinformation, and the possibility of building businesses on models that bring negative social consequences.
Scientific and social organisations worldwide are taking initiatives to adopt socially beneficial AI and establish an ethical framework. The values that recur are transparency, honesty, non-maleficence, responsibility, privacy, beneficence, freedom, autonomy, trust, dignity, solidarity, and sustainable development.
To bring order to the field, Oxford University researchers Luciano Floridi and Josh Cowls created an ethical framework for AI built on the four classic principles of bioethics: beneficence, non-maleficence, autonomy and justice. They added a fifth principle, explainability, which also encompasses interpretability.
The researchers recognised that the full achievement of ethical AI requires combining the above principles with algorithmic ethics.
Algorithmic ethics refers to the moral guidelines and ideals built into the creation of AI systems. It serves to guide the development of these systems so that they meet standards of fairness, privacy and accountability.
Researchers, developers and AI engineers must constantly review and analyse algorithms to identify and correct errors that may occur over time.
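To make this less abstract, here is a minimal sketch of what such periodic review might look like in practice: a model's recent accuracy is compared against the baseline accepted at deployment, and a large drop triggers a manual audit. All names, data and the tolerance are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch of periodic model review: compare recent accuracy against the
# baseline accepted at deployment and flag degradation for human investigation.
# All names, example data and the tolerance are illustrative assumptions.

def accuracy(predictions, labels):
    """Fraction of predictions that match the true labels."""
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)

def needs_review(recent_predictions, recent_labels, baseline_accuracy, tolerance=0.05):
    """Return True if recent performance dropped below the accepted baseline."""
    return accuracy(recent_predictions, recent_labels) < baseline_accuracy - tolerance

# Example: the model was accepted at 92% accuracy; a recent batch scores far lower.
if needs_review([1, 0, 1, 1, 0, 0], [1, 1, 0, 1, 0, 1], baseline_accuracy=0.92):
    print("Model drift suspected: schedule a manual audit of the algorithm.")
```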
One of the main risks of AI is bias, which can lead to unfair and discriminatory results. Biased AI algorithms can "make decisions" that are unjust to specific groups of people, reinforce social inequalities, and perpetuate prejudice.
Prejudices originate from stereotypical representations deeply rooted in societies. They take many forms (racial, socio-economic, and gender-based), resulting in unfair outcomes for particular social groups. Search engine technology is not neutral either: it prioritises the results with the most clicks, depending on the user's preferences and location.
The reliability of systems depends on the data they are trained on – biased data can lead to biased algorithms
At the same time, it is difficult for Internet users to determine whether the data, and consequently the choices based on it, are fair and trustworthy, which can lead to undetected bias in such systems.
In this way, real-world prejudices and stereotypes are perpetuated on the Internet. Without extensive testing and diverse teams, unconscious biases can enter machine learning models and become entrenched in them.
Experts warn that algorithmic bias is already present in many industries. The topic of bias in machine learning will become significant as the technology spreads into critical areas such as medicine and law.
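As an illustration of how such testing can work, the sketch below checks one simple notion of bias: whether a model's positive decision rate (for example, loan approvals) differs widely between demographic groups. The data, group labels and tolerance are invented for the example; real fairness audits use richer metrics and domain knowledge.

```python
# Illustrative check for one simple notion of bias: whether a model's positive
# decision rate differs widely between demographic groups (demographic parity).
# Group names, decisions and the tolerance are made up for the example.

from collections import defaultdict

def positive_rates(decisions, groups):
    """Share of positive decisions per group, e.g. loan approvals."""
    totals, positives = defaultdict(int), defaultdict(int)
    for decision, group in zip(decisions, groups):
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

decisions = [1, 1, 0, 1, 0, 0, 1, 0]          # 1 = approved, 0 = rejected
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = positive_rates(decisions, groups)
gap = max(rates.values()) - min(rates.values())
if gap > 0.2:  # tolerance chosen arbitrarily for illustration
    print(f"Possible bias: approval rates differ by {gap:.0%} across groups {rates}")
```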
Undoubtedly, AI has the potential to advance societies, but it also threatens their rights to privacy and security. To promote AI as trustworthy, its creators must prioritise human rights and guarantee that the design and implementation process is fair and accountable.
To protect privacy, setting clear standards should become part of AI governance
The responsible development of AI is tied to the ethics of data, which is the fuel that drives artificial intelligence.
Data collection and use must be lawful, individuals must be aware of and in control of their data, and privacy must be respected throughout the development and deployment of AI.
The data used to train individual AI models, its type and origin, is essential here. These models are known to have been trained on vast amounts of text, books, and artwork. Creative industries are outraged by the practice of training AI models on copyrighted material without artists' consent.
Using GenAI tools, you can modify existing works, select elements of them, or combine two or more images into one, inventing something seemingly new. Many artists protest against the use of their art to train AI models such as Stable Diffusion. Current lawsuits concern images created by AI generators, and the cases may soon extend to musicians and the entire creative industry.
People are also concerned about how AI is used to spy on people and spread fake content online through tools that operate without a human controller.
AI is becoming an increasingly integral part of facial and voice recognition systems. Some of these systems have real business applications and directly impact people. These systems are susceptible to biases and errors introduced by their human creators. [see: Bias and prejudice].
The creators of AI are, in a way, representatives of future humanity. They have an ethical obligation to be transparent in their work. Building trust and promoting AI transparency can be achieved by creating detailed documentation on how systems work.
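One hypothetical form such documentation can take is a "model card": a structured summary of what the system is for, what data it was trained on, and where it is known to fail. The sketch below shows an assumed, simplified structure; the field names and values are illustrative only, not a formal standard.

```python
# A minimal, hypothetical "model card": structured documentation describing how
# a system works, what data it was trained on, and its known limitations.
# Field names and values are illustrative assumptions, not a formal standard.

import json

model_card = {
    "model_name": "example-credit-scoring-v1",      # illustrative name
    "intended_use": "Pre-screening of loan applications; final decision by a human.",
    "training_data": "Anonymised historical applications, 2015-2022 (assumed).",
    "evaluation": {"accuracy": 0.91, "groups_compared": ["A", "B"]},
    "known_limitations": [
        "Lower accuracy for applicants with short credit histories.",
        "Not evaluated outside the country of origin of the training data.",
    ],
    "contact": "ai-ethics-team@example.com",
}

# Publishing this summary alongside the system is one way to build trust.
print(json.dumps(model_card, indent=2))
```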
Setting clear rules is critical to ensure that AI is used ethically and responsibly
OpenAI, according to its declarations, is a non-profit organisation dedicated to the research and development of open-source AI beneficial to humanity. Unfortunately, making the code open does not make it understandable, which means that the AI code itself is not transparent. At the same time, there is a concern that making the full capabilities of AI systems available to certain organisations may do more harm than good.
Microsoft, in turn, has expressed concern about allowing universal access to its facial recognition software, for example by the police. At the same time, as reported by the media, the company laid off its entire AI ethics team for cost-cutting reasons, and it is not the only company that has decided to save money in this area.
Many global organisations and ombudsmen recommend government regulation as a means of ensuring transparency.
AI applications can potentially affect many areas of life and the labour market. Presumably, work will be divided into tasks that can be handed over to AI and tasks that will remain with humans.
The potential replacement of human jobs by AI raises ethical questions and will lead to significant changes in the labour market
As a result of this division, certain professions and specialised activities may disappear. What about creatives? Various studies show that AI text generators are capable of producing grammatically correct content, but human creators have a deeper understanding of real-world context and emotions. For now, humans still create better content than AI, which lacks genuine logical reasoning.
As far back as the 1950s, Isaac Asimov pondered the ethics of machines. He tested the limits of the laws he created to see where they would cause paradoxical or unexpected behaviour. His work suggests that no set of fixed rules can adequately predict all possible circumstances.
Currently, the UN, OECD, EU and many countries are working on strategies to regulate AI and find the appropriate legal framework.
In June 2019, the EC High-Level Expert Group on AI (AI HLEG) published "Policy and Investment Recommendations for Trustworthy Artificial Intelligence". It is the second stage of their work, following the publication of the "Ethics Guidelines for Trustworthy Artificial Intelligence" in April 2019.
The June AI HLEG recommendations cover four main areas: humans and society at large, research and academia, the private sector, and the public sector.
There should be a set moral framework that AI cannot change
The European Commission says that the HLEG recommendations reflect both the opportunities for AI technologies to drive economic growth, prosperity and innovation, and the potential risks involved. It states that the EU intends to lead the way in shaping policies that regulate AI in the international arena.
The EU General Data Protection Regulation requires companies to implement safeguards to ensure the transparency and fairness of AI algorithms. GDPR applies to all organisations operating in the EU and requires companies to process personal data securely and to enable individuals to access and control their data.
Artificial intelligence itself cannot be held responsible, but the people who create it can
IT companies and developers play a crucial role in creating AI systems, so they should ensure that ethical principles are embedded in them.
That includes ensuring the security and integrity of these systems by creating algorithms that take many perspectives into account, which reduces the risk of perpetuating prejudices that may exist among their creators.
Unethical AI has the potential to harm individuals and society. Preventing this requires the involvement of people, including the IT industry, throughout the development and implementation of AI.
Appropriate regulations can ensure the development of ethical AI by setting standards of transparency, limiting discrimination and protecting data. Introducing guidelines and integrity metrics and ensuring team diversity can help AI developers build safe and accountable AI.