The Open Data Institute (ODI) and the UK government have announced “data trust” pilots to explore whether this model can help to rebuild trust in data and provide fair, open access. The announcement was made at this week’s ODI Summit in London. It was particularly topical, coming less than a week after Google unveiled its intention to bring its healthcare-focused subsidiary, DeepMind Health, into the main arm of the organisation, sending waves of concern through the sector.
The ODI pilots will test the feasibility of creating legal structures to provide third-party stewardship of data. While the precise focus of the pilots is to be decided, example data areas cited are related to cities, the environment, biodiversity and transport.
The not-for-profit ODI was co-founded in 2012 by the inventor of the web, Sir Tim Berners-Lee, and artificial intelligence expert Sir Nigel Shadbolt. When data is generated and pooled, it can have a broad benefit, said Shadbolt, but it should be used and analysed according to very clear rules. Applying the trust law model to data has the potential to achieve this, he felt, but so too could other data access models.
Data trusts were proposed within the government’s UK Industrial Strategy, following on from the 2017 Independent Review of AI, which identified a market failure in access to data. Regulations might also have a role, as with the UK’s Open Banking initiative, with something similar now being planned for the energy market. Here, institutions are forced to open up their systems and data to improve competition and innovation. In total, said Shadbolt, there are probably around 20 different data access models.
On data trusts, the ODI will also work on a separate pilot with the Mayor of London’s office and the Royal Borough of Greenwich, within the Smarter London Together initiative. This pilot will focus on real-time data from Internet of Things (IoT) sensors, using it to help technology innovators create solutions for city challenges. Future data trust work is likely to use the Urban Sharing Platform that is being developed by the Mayor of London’s office and Greenwich to collect and share live city-related data.
The ODI is also calling for geospatial data to be part of the open data infrastructure, responding along these lines to the UK’s Geospatial Commission’s call for evidence for its Geospatial Strategy, which is due to be published next year. The strategy will tackle access and openness at both government agencies and private organisations. In the latter category, most obviously there is Google Maps, which has recently imposed a steep price hike for commercial use of its data, but others that charge include entities such as the privatised Royal Mail.
Ensuring data from public and private sectors is openly available and interoperable would allow organisations to use this to build new services and technologies. Autonomous vehicles, drones and other transport services will have particular reliance on geospatial data. Commercial satellite imagery aids planning and response to disasters; earth observation data allows the tracking of populations by human rights campaigners or deforestation by environmentalists; and supply chain data can bring transparency to the origins, movement and delivery of goods, such as food.
Shadbolt emphasised that the aim is not to destroy commercial value or confidentiality but to ensure there is not monopolistic behaviour. Jeni Tennison, ODI CEO, added that an independent advisory group is also planned, of individuals and organisations, to explore and share ideas about data trusts and other models across the globe.
Berners-Lee told the ODI Summit audience that it was now people on the street, not just people such as those in the Kings Place auditorium, who were concerned about data privacy. Similarly, Steve Wood, deputy information commissioner at the Information Commissioner’s Office, felt there had been a “mainstreaming” of awareness about data rights. A “fantastic model”, he feels, is that of the citizens’ juries adopted by the Connected Health Cities (CHC) programme, whereby two juries, each of 18 citizens, met for four days in Manchester and York to consider the acceptability of planned and potential uses of health data.
Data privacy is “very abstract”, said behavioural scientist Pat Fagan. “The natural disposition is to trust but once violated, that’s it.” There is increasing concern about the impact on society. Catherine Miller, director of policy at think-tank Doteveryone, cited its People, Power and Tech survey, carried out in December 2017. Even before the Cambridge Analytica scandal, while half of respondents felt the internet had had a very positive effect on their lives, only 12 per cent felt there had been a similar effect on society.
Miller said that if Cambridge Analytica scraped her data then, while this had no personal impact, “I am very concerned about how my aggregated data is changing the way society functions”. In a fair, inclusive, democratic society, “how do we make responsible tech the normal”, she asked. An Office for Responsible Technology might help, providing a coherent vision and identifying and filling gaps in the regulatory landscape, she felt.
Berners-Lee pointed out that, at present, if someone illegally uses data ahead of an election, there is little comeback after the result, beyond relatively small fines. He felt the backlash had brought some positives, with heightened awareness as well as a willingness to do something about it – he cited the spike in readership of the New York Times in the face of fake news.
Miller added that, fundamentally, trust has been eroded by companies using data in untrustworthy ways. She referenced the “Mark Zuckerberg apology tour” and the current Google/DeepMind Health situation. Andrew Eland, engineering lead at DeepMind Health, accepted that it is “completely reasonable” for people to be concerned, but said mixing health data and advertising is unacceptable, and pointed out that, ultimately, there are also legal safeguards.
Critics say Google has broken its promise that NHS data will never be connected to Google accounts or services and they are also concerned at the closure of the independent review board that oversaw the company’s work with the healthcare sector.
The increasing prevalence of algorithms is also raising a lot of questions about transparency, bias and ethics. “Technology is becoming easier and easier to use… but harder and harder to understand,” said Miller.
The topic is certainly moving up the agenda of the UK government, as reflected in the creation of a Centre for Data Ethics and Innovation, plans for a data ethics framework for public sector data, and plans for a National Data Strategy to complement the existing digital strategy.
Roger Taylor, the recently appointed chair of the Centre for Data Ethics and Innovation, does not anticipate the creation of a new regulator, as the issues cut across all sectors. Instead, he said, the centre’s role is likely to be supporting other regulators, markets and public sector institutions: identifying gaps and threats, prioritising issues and potentially looking at new regulations.