Members of the House of Lords have called for an artificial intelligence code of conduct in the UK.

The House of Lords Select Committee on Artificial Intelligence, in a report titled AI in the UK: Ready, Willing and Able?, argued that the UK can lead the world in AI, as long as it puts ethics at the centre of its plans.

The Committee recommended five principles to guide how researchers and businesses develop artificially intelligent systems in the UK.

The Lords’ recommendations are:

  1. Artificial intelligence should be developed for the common good and benefit of humanity.
  2. Artificial intelligence should operate on principles of intelligibility and fairness.
  3. Artificial intelligence should not be used to diminish the data rights or privacy of individuals, families or communities.
  4. All citizens should have the right to be educated to enable them to flourish mentally, emotionally and economically alongside artificial intelligence.
  5. The autonomous power to hurt, destroy or deceive human beings should never be vested in artificial intelligence.

As part of its report, the Lords called for these principles to be formulated into a cross-sector AI code that could be adopted internationally as well as in the UK.

Commenting on the report, the Committee’s chairman, Lord Clement-Jones, said: “The UK has a unique opportunity to shape AI positively for the public’s benefit and to lead the international community in AI’s ethical development, rather than passively accept its consequences.”

Lord Clement-Jones’ comments have similarities to those made by the French president, Emmanuel Macron, who, in a recent interview with Wired, commented on his desire to be an active participant in the AI revolution: “I want to be part of it. Otherwise I will just be subjected to this disruption without creating jobs in this country.”

Lord Clement-Jones continued: “The UK contains leading AI companies, a dynamic academic research culture, and a vigorous start-up ecosystem as well as a host of legal, ethical, financial and linguistic strengths. We should make the most of this environment, but it is essential that ethics take centre stage in AI’s development and use.

“AI is not without its risks and the adoption of the principles proposed by the Committee will help to mitigate these. An ethical approach ensures the public trusts this technology and sees the benefits of using it. It will also prepare them to challenge its misuse.”


Experts warned, however, that the Lords need to be careful with their wording. The Society for the Study of Artificial Intelligence and the Simulation of Behaviour has come out in support of the ‘key messages’, but also commented: ‘Some phrases in the principles published in the report ought to be modified to acknowledge that AI technologies are tools developed by humans.

‘We don’t “work alongside AI”; we use AI to achieve certain goals or outcomes, just as with any other human-made tool. Granting AI an independent human-like existence, even through casual use of language, sets us on a dangerous course towards machines becoming moral patients: things to which we owe some moral duty.’

Attitudes to the ethical implications of AI vary across the world. China, in particular, has demonstrated some worrying technologies, including extremely advanced facial recognition.