Deputy CEO, techUK
In 2020, debates about the ethical use of digital technology have been front page news.
From COVID-19 and Black Lives Matter to exam results and the US election, some of the most controversial moments of 2020 have come down to questions about how, why, when and where we should use powerful new digital technologies like AI.
Building ethical frameworks and guidelines
But these were not new questions: over the last five years a growing community of technologists, policymakers, academics and civil society advocates around the world has been working to build frameworks and guidelines to help us navigate the most complex questions about how digital technology should be used.
When push came to shove, how useful were these ethical frameworks when we had to apply them in high-profile, high-pressure environments like the race to develop contact-tracing apps or the challenge of countering online disinformation?
Some argue that digital ethics has failed the test of 2020. Yet, if we look at the example of contact tracing, there was actually a very informed public debate on ethics that had a direct impact on the development of apps around the world. And throughout 2020 we have seen many examples of businesses acting upon ethical concerns, ranging from the risk of ethnic bias to the challenge of climate change. But is it now time for ‘soft’ ethics to make way for ‘hard’ regulation?
The importance of law and ethics in digital innovation
This may feel like a binary choice, but it is not. Good law needs to be built on good ethics. To be effective, new regulation will need to be rooted in the knowledge and understanding built through deliberation on ethics. It is also clear that regulation will always struggle to keep pace in a world of exponential change. In a world where innovators are ahead of regulators, the first line of defence against doing the wrong thing is to embed a deep understanding of how to do the right thing. This is why embedding digital ethics remains vital for responsible innovation.
Institutions are providing new guidance
Without doubt there is a huge amount to learn from 2020. But there is also cause for optimism. There is now far greater awareness and understanding across the public and private sectors of both the need to handle powerful technologies with care and how to do that in practice. We now have established ethical frameworks to enable us to ask the right questions and inform the right procedures. We have institutions, such as the Ada Lovelace Institute and the Centre for Data Ethics and Innovation (CDEI), that can provide guidance on how to balance the risks and opportunities of AI. And we have an active community of informed lawmakers, such as the All-Party Parliamentary Group on AI, and civil society groups that are determined to hold both businesses and governments to account.
As we have seen in just the last week with DeepMind's protein-folding breakthrough, AI can enable huge scientific advances. The potential societal benefit of AI is huge. But so is our responsibility to use it with care. Sound digital ethics, along with good law, will be essential to getting this right.