Data Protection and AI: how to apply the data protection principles to the use of information in AI systems.

Speaker: Alister Pearson, Principal Policy Advisor for AI and Data Science, Information Commissioner’s Office (ICO).

Abstract: Artificial Intelligence (AI) has the potential to bring significant benefits to researchers. LUSTRE has identified that AI can be used to identify sensitive materials within a mass of born-digital records, making non-sensitive materials accessible, and can also be used to search vast amounts of data where keyword searches would not be effective. However, the use of AI can also introduce or exacerbate risks to people's right to privacy, as well as to other rights relating to the processing of their personal information. In this presentation, I will discuss some of the main data protection risks of using AI in a research context, as well as ways to mitigate them. This will include considering risks during both the development and deployment of AI. My presentation will also cover the ways in which the Information Commissioner's Office can further help researchers through the different services that we offer, including our recently launched Innovation Advice service. One aim of the presentation is to illustrate that data protection provides a framework for processing people's personal data, rather than a barrier to doing so.

Bio: Alister Pearson is a Principal Policy Advisor in the AI and Data Science Team at the Information Commissioner's Office (ICO). He was part of the team that produced the ICO's Guidance on AI and Data Protection, and led the development of the AI and data protection risk toolkit, which won the Accountability prize at the Global Privacy Assembly Awards 2022.