Last week, Google announced a new set of principles for the use of AI. Spearheaded by its CEO, Sundar Pichai, the principles look to ensure that Google, one of the most influential and powerful organisations in the world, doesn’t allow super-advanced artificial intelligence technology to be used for nefarious purposes.

It was high time Google made such an announcement. It had faced internal strife over its involvement with the military and the use of AI to create weapons. By its own admission, the company has a ‘deep responsibility’ to get AI right.

That’s why it announced seven ‘concrete standards’ that will guide its research and product development in the area.

You can read the full list of principles here, but as a summary, they are (in Google’s order):

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles.

Noble aims indeed. But what does an expert in AI ethics think of them? We spoke to Bertie Muller, chair of the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB).

Google’s AI principles – the expert view

Is it a good thing that Google has done this?

It is commendable that Google has decided to publicise its principles for working with AI. While the principles highlight AI's "potential to improve our lives" and warn that the power of AI will impact society, they do not explicitly spell out any risks of AI. Google realises that the principles as stated in this initial blog post will have to be adapted to the rapidly changing technological and societal environment.

Are the principles fit for purpose?

There is significant overlap with other sets of principles recently published (e.g., by the Lords Select Committee on AI and the European Commission), as well as with the well-established EPSRC Principles of Robotics, which can be viewed as a root of most current discussions on AI ethics.

A socially beneficial AI, or AI for good, stands alongside principles addressing fairness, safety, accountability, and privacy. These principles need to be applied at the forefront of scientific developments, as stated in Principle 6 – upholding high standards of scientific excellence. This principle can be viewed as a meta-principle that guides all the others.

Likewise, the 7th principle plays a different role, requiring that all uses of AI in actual products adhere to all of the previous principles. This should be achieved through a human evaluation of purpose, nature of provision, scale, and Google's involvement.

However, the principles are sometimes a bit vague, e.g., leaving unspecified what is meant by "appropriate" with respect to accountability issues – in particular, opportunities for feedback and for human direction and control.

Are there any loopholes or weaknesses?

Google states what kinds of applications it will not pursue, but again there is some vagueness: What counts as overall harm? When do benefits "substantially outweigh" risks? When does a weapon not have the principal purpose of causing injury to people?

Google also states that it will not develop "AI for use in weapons", but can that really be avoided? Modern weapons incorporate many technologies that were not expressly developed for use in weapons, and AI is no different.

Final thoughts?

The Google blog post ends with some very important points on creating sustainable AI: working in interdisciplinary environments and with scientific rigour must be cornerstones of AI development for the long term.

This is a very good first attempt at assuring us that responsible AI design principles are in place within Google. However, the general public will only accept this, and the products, if Google can demonstrate for each product that the principles have guided its design. That requires transparency and a degree of explainability to instil trust in its systems.


Read more about the ethics of artificial intelligence.