Fundamentals of AI ethics with Dr. Fiona Browne

AI Ethics

Dr Fiona Browne heads Datactics’ four-strong AI team building explainable AI solutions.

Here, Fiona delves into the thorny topic of ethics, the centrepiece of any AI expedition. More specifically, the post explores how to ensure that the potentially negative ethical impacts of AI do not outweigh the benefits it can deliver across industry and academic sectors.

AI and big data

Since 2016, the domain of AI/ML has been gathering momentum with breakthroughs in NLP and computer vision. Andrew Ng, one of the founders of Google Brain, has referred to Artificial Intelligence (AI) as “automation on steroids” and “the new electricity”. We really have come a long way since the 1950s, when Alan Turing first posed the question “Can machines think?” in his seminal paper Computing Machinery and Intelligence.

AI is here and is already being applied, from email spam filters to personal assistants such as Siri or Alexa, through to social media and customer service chatbots.

One of the most interesting aspects of this technology is that it is general-purpose, and we can apply it across many diverse sectors, from agriculture to manufacturing to healthcare to finance. The potential applications are vast and can provide us with faster services, whether automating administrative tasks or developing ‘decision-aid’ tools that help clinicians analyse our medical data. This is clearly an exciting time, and we can see how AI will continue to be embedded into our everyday lives, from obtaining bank loans to driving our cars.

This has, quite rightly, raised ethical questions around the safety, confirmation bias and transparency of AI. Perhaps an even bigger question is: “what is an ethical AI system and how can I validate it?”

It is encouraging that such questions are being asked. We know that machine learning algorithms learn patterns from the data they are trained on, but if that data contains bias, the bias will persist through to the predictions the model makes. Many types of bias exist, from gender bias to selection bias. Such biases can be inherent in the data or extrinsic to it; that is, bias introduced by the unintentional omission of data arising from how the data was collected. Two excellent examples from Harvard Business Review delve deeper into this subject and are well worth taking the time to read. An interesting development is the emerging discipline of AI ethics, dedicated to addressing these questions and involving experts across diverse domains including philosophy, computing science, academia and government.
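To make the point concrete, here is a minimal, hypothetical sketch (synthetic data, scikit-learn, nothing drawn from any real system) of how a classifier trained on biased historical decisions reproduces that bias, and how a simple demographic parity check can expose the gap in positive-prediction rates between two groups:

```python
# Hypothetical illustration only: bias in training data persisting into predictions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: past approvals favoured group A regardless of skill.
group = rng.integers(0, 2, n)        # 0 = group A, 1 = group B (a protected attribute)
skill = rng.normal(0, 1, n)
approved = ((skill + 0.8 * (group == 0) + rng.normal(0, 0.5, n)) > 0.5).astype(int)

# Train a simple model on the biased labels.
X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, approved)
pred = model.predict(X)

# Demographic parity check: compare positive-prediction rates per group.
rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
print(f"Approval rate, group A: {rate_a:.2f}")
print(f"Approval rate, group B: {rate_b:.2f}")
print(f"Demographic parity difference: {rate_a - rate_b:.2f}")
```

The model has learned the historical preference for group A, so the two approval rates diverge even when skill is comparable; checks like this are one small part of validating whether a system is behaving ethically.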

We are seeing machine learning models and AI solutions, such as facial recognition and real-time video analysis, move into our everyday lives and replace humans in the decision-making process.

These capabilities could be used for citizen protection, especially given the current contact-tracing demands of the Coronavirus situation. The key is striking a balance between what the technology can potentially do and using it responsibly, so that our democracy and privacy are not undermined by ethical issues and the various types of data bias.

The challenge, then, is to develop models for the society we wish to inhabit, not merely to replicate the society we have.

Having technologies that are built and informed by a diverse workforce, with different people and different points of view, is one factor that will help. Organisations such as the Organisation for Economic Co-operation and Development (OECD) have developed principles to promote AI that is innovative and trustworthy. The Alan Turing Institute also has initiatives around fairness, transparency and ethics, with similar work under way at MIT and Harvard. However, as these technologies have begun to touch our everyday lives in increasingly unseen ways, it will be important that we are all given an equal voice in this conversation. Democratising the debate on ethics in AI needs to involve greater community understanding, political guidance and policies of inclusion to prevent, and hopefully even undo, the biases already hard-coded into human society.

Click here for more from Datactics, or find us on LinkedIn, Twitter or Facebook for the latest news.