The principles and practices of ethical AI: A framework for responsible innovation

Generative AI is an exciting area of technology that has the potential to be genuinely revolutionary. Like all revolutions, however, it also has the potential to be destructive, and so organisations need to ensure they develop an appropriate ethical framework for its use.

In the third piece in his five-part series of articles on the business applications of artificial intelligence, TEKenable group chief technology officer Peter Rose explains how, and why, organisations can use AI ethically.

Artificial intelligence (AI) offers the potential to transform how businesses and public sector organisations operate. The possible gains to be had, even from simple applications of AI, are enormous, enabling businesses to speed through bureaucracy, better understand clients and customers, and streamline a range of activities from contact centres to fighting fraud and the provision of healthcare. However, AI is also fraught with risk.

In short, while artificial intelligence is one of the most powerful and influential technologies of our time, it also poses significant ethical challenges and risks, such as bias, privacy violations and lack of accountability, potentially even challenging fundamental human dignity.

As a result, it is crucial to ensure that AI is developed and used in a responsible and ethical manner, one that respects human values and rights. In this article, I will outline the ethical challenges raised by the application of AI and machine learning (ML), as well as look at how they might be addressed.

Fundamentally, ethical use of AI demands the adoption of a human-centric approach, which is to say one that puts humans at the centre of the design, development, and deployment of AI systems. How that will work in practice remains open to debate, but there are already some markers on the road.

The EU AI Act

The first thing to understand about generative AI is that it exists within an existing legal framework. After all, AI is fundamentally based on the processing of data and, therefore, use of AI is subject to existing data protection laws such as the General Data Protection Regulation (GDPR), which already strictly governs how EU residents’ data can be used. Further regulation is coming, however.

Notably, the EU has approved the draft text of legislation to enact the world’s first comprehensive law on the use and abuse of artificial intelligence. The EU Artificial Intelligence Act is designed to protect EU citizens from abuse by taking a “risk-based approach” to AI, explicitly prohibiting systems designed for harmful practices such as social scoring and subliminal or other manipulative techniques.

For more information on the EU AI Act, I invite you to read my previous blog post “How businesses can prepare for EU AI regulation”, as well as download our recent white paper on AI in financial services.

Other countries are also legislating: the British government, for example, is currently engaged in consultation on its AI regulation framework, while the White House has proposed an “AI Bill of Rights” for the United States – the very name Bill of Rights leaving little doubt as to the aspirations behind it.

While these, and other, frameworks are all likely to differ significantly in their details, they will all, to a greater or lesser degree, seek to protect citizens’ rights, including the right to privacy and the right to be free from discrimination, as well as specific provisions such as authors’ moral rights. As a result, however these frameworks pan out, it is certain that they will restrict how and when AI can be used by organisations.

More than compliance

Complying with relevant legislation is only the first step, however. Use of AI creates a plethora of other potential risks, ranging from risk to organisational reputation (for example, in the case of a data breach, misuse of data, or creation of an inappropriate response) to wider social risks, including exclusion, bias and failure in the planning or delivery of public services.

Ethical use of AI demands that organisations recognise and address the full range of risks it can raise, including risks to privacy, fairness, agency and dignity. It also requires that AI systems are used transparently and offer explainability in decision-making. In other words, accountability.
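What does explainability look like in practice? By way of illustration only, the short Python sketch below uses one model-agnostic technique, permutation importance from the scikit-learn library, to show which input features are actually driving a model’s decisions. The model, feature names and data here are hypothetical placeholders, not a recommendation of any particular toolchain.

```python
# Illustrative explainability sketch: permutation importance shuffles each
# feature in turn and measures the drop in accuracy. A large drop means the
# model leans heavily on that feature. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # e.g. income, tenure, age (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # decision driven mainly by feature 0

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "tenure", "age"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The principle matters more than the particular library: a decision whose drivers cannot be traced cannot be explained, and therefore cannot be accounted for.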

In a 2022 working paper, the Organisation for Economic Co-operation and Development (OECD) expressed concern about a range of issues including expanded workplace monitoring, the multiplication and systematisation of human biases, reduction of workers’ autonomy and agency, and the lack of transparency inherent in the use of proprietary source code or algorithms in decision-making tools.

These concerns, all related to employment and the workplace, are only one face of the ethical concerns around AI. Indeed, while many can be extended to non-workplace scenarios, other biases also need to be considered, such as how AI interpretations of data may affect policy decisions. For example, provision of higher education calculated according to demographic statistics and historical trends could easily lead to underprovision of places in areas of greater economic hardship.

Healthcare, as a key area for AI adoption, whether in diagnosis, patient management or elsewhere, is similarly fraught. 

AI has already driven significant change in healthcare, including in imaging and electronic medical records (EMR), diagnosis and treatment, drug discovery and more. However, its use has raised concerns about the quality, safety and reliability of AI systems.

Healthcare decisions can have life-or-death consequences for patients, and therefore require a high level of accuracy, transparency and accountability. Moreover, health data is highly sensitive and personal, and requires a high level of protection. Healthcare providers and regulators therefore need to ensure that AI systems are designed and used in a way that respects the rights and interests of patients, as well as the professional obligations of healthcare practitioners.

Noting this, research published by the Brookings Institution, a US-based think tank, suggested that solutions included government provision of infrastructural resources for data, increased oversight by health systems and hospitals as well as professional organisations, and better medical education “to prepare providers to evaluate and interpret the AI systems they will encounter”.

Finance, and banking and insurance in particular, also throws up potentially thorny problems. Here, there is serious concern that human biases could be codified by loose use of AI, leading to outright discrimination: in effect, a kind of tech-driven ‘redlining’ that could see various groups denied access to financial services based on characteristics such as ethnicity, sex and gender, or sexual orientation. The serious moral consequences of this scarcely need to be spelled out, but were it to happen it would also have a disastrous effect on an organisation’s reputation.
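One way such discrimination can be surfaced before it does damage is with simple fairness checks on a model’s decisions. The sketch below, in which every figure is invented purely for illustration, computes approval rates per group and the ‘disparate impact’ ratio between them; a common rule of thumb (the ‘four-fifths’ rule used in US employment law) treats a ratio below 0.8 as a warning sign worth investigating.

```python
# Hypothetical fairness check: compare approval rates across groups.
# All figures below are invented for illustration only.
from collections import Counter

# (group, approved) pairs, as a model's decisions might be logged
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals, approvals = Counter(), Counter()
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = rates["group_b"] / rates["group_a"]  # disparate impact ratio

print(f"approval rates: {rates}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # the common 'four-fifths' rule of thumb
    print("Warning: possible adverse impact; review the model and its data.")
```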

The same is true in recruitment, which is otherwise an ideal application for AI as the ability of AIs to consume, summarise and screen job applications offers the promise of reducing a significant bureaucratic burden.

None of this has gone unnoticed. As far back as 2018, a Council of Europe (CoE) study on discrimination and bias expressly highlighted the “risks that artificial intelligence (AI) and automated decision-making systems pose to the principles of equality and non-discrimination in employment, the provision of goods or services in both the public and private sectors, public security policies or even in the fight against fraud”.

This is precisely why a human-centric approach to AI is needed: one that recognises that the data on which AIs are trained is created by, and belongs to, humans, and one that maintains human oversight before decisions are made. After all, AIs are only as good as the data they are trained on, and even then their output requires critical, and therefore human, analysis.

Accuracy, or the lack of it, also creates ethical risks. We are all aware of the so-called ‘hallucination’ problem with generative AIs. Hallucinations, which occur when AIs state untruths as truths, can be mitigated through prompt engineering, but it is essential to understand not only that they can occur, but also why.

Indeed, Sam Altman, the chief executive of OpenAI, recently told attendees at Dreamforce, the annual conference organised by customer relationship management (CRM) software company Salesforce, that hallucination was a side-effect of how generative AIs work. 

“One of the sort of non-obvious things is that a lot of value from these systems is heavily related to the fact that they do hallucinate. If you want to look something up in a database, we already have good stuff for that,” he said.

Put simply, if AIs were not able to make things up then they would not be AIs.
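That said, the risk can be managed. As noted above, prompt engineering is one control: grounding the model in retrieved context, lowering its temperature and explicitly permitting it to admit ignorance all reduce the scope for invention. The sketch below illustrates the idea using the OpenAI Python client as one example; the model name and context are placeholders, and this is a minimal illustration rather than a production pattern.

```python
# Illustrative sketch of reducing hallucination through prompting:
# ground the model in supplied context, lower the temperature, and
# give it explicit permission to say "I don't know".
# Requires the openai package and an OPENAI_API_KEY; model name is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

context = "TEKenable's white paper covers AI in financial services."  # retrieved text
question = "What does the white paper cover?"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whichever model you have access to
    temperature=0,        # reduce variability in the output
    messages=[
        {"role": "system",
         "content": "Answer ONLY from the provided context. "
                    "If the context does not contain the answer, reply: I don't know."},
        {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```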

None of this means that AIs are inherently unethical or unreliable. On the contrary, AIs can be powerful tools for enhancing human capabilities and well-being. However, it does demand that AIs be developed and used with care and caution. It also means that AIs need to be aligned with human values and norms, which, in turn, requires a clear ethical framework that guides the design, development and deployment of AI systems, as well as the evaluation, monitoring and auditing of their impacts.

Such a framework should be based on universal principles, such as respect for human dignity, autonomy, fairness, privacy and, of particular importance for public sector applications, democracy. It should also be informed by best practices, such as stakeholder engagement, human oversight, transparency, explainability, and accountability.

In short, AIs pose ethical risks because they are a creation of human beings, acting on information created by and about human beings. As a result, the only approach to AI that can deal with the potential for ethical risk is one that places human beings, not machines, at its heart.
