The AI Council has published a “roadmap” of advice to the government on developing a national UK strategy for artificial intelligence (AI).

Eye-catchingly, it advocates what it calls “moonshots” that “could tackle fundamental challenges such as creating ‘explainable AI’ and developing smart materials for energy storage”.

The council is a non-statutory body chaired by Tabitha Goldstaub, consisting of 20 people from academia and industry, including Wendy Hall, professor of computer science at the University of Southampton, Marc Warner, the CEO of AI consultancy firm Faculty, and Adrian Smith, chief executive of The Alan Turing Institute.

The council was launched in 2018, on the back of the government’s response to a House of Lords AI report that recommended the UK pick ethics as a realistic niche in the related fields of artificial intelligence and machine learning.

It was bolstered in 2019 with recruits from online retailer Ocado and the Independent Commission on Freedom of Information. Adrian Smith also joined at that juncture.

The 37-page roadmap has 16 recommendations for the government, across the categories of research and development, skills and diversity, infrastructure and trust, and adoption in the health, climate and defence sectors.

The roadmap document is partly based on 450 responses to a call for input, issued in October 2019, from what the council describes as the AI “ecosystem” of individuals interested in artificial intelligence.

The introduction states “we need to ‘double down’ on recent investment the UK has made in AI [and] we must look to the horizon and be adaptable to disruption”. It says the council stands ready “to convene workshops with the wider ecosystem to capture more detail and work together to ensure that a future National AI Strategy enables the whole of the UK to flourish”.

The Alan Turing Institute has a central place in the document. The council advises the government to “provide assured long-term public sector funding that will give the Turing Institute and others the confidence to plan and invest in strategic leadership for the UK in AI research, development and innovation”.

On the skills front, the council advocates a decade-long programme of “research fellowships, AI-relevant PhDs across disciplines, industry-led masters and level 7 apprenticeships”. It also suggests tracking diversity data to “invest and ensure that underrepresented groups are given equal opportunity and included in all programmes”.

The council recommends that the UK, now independent of the European Union (EU), should lead on the ethics of algorithmic decision-making: “The UK has a crucial opportunity to become a global lead in good governance, standards and frameworks for AI and enhance bilateral cooperation with key [nation-state] actors.”

The council wants to see government support for UK AI suppliers as well as “public sector investments in AI, building capability in the use of data, analytics and AI to ensure intelligent procurement of AI as part of projects for public benefit”.

The council puts particular stress, in the latter part of the roadmap document, on healthcare – building on the work of NHSX – and cyber security.

The document states: “Building on the UK’s world-leading life sciences industry and increasingly digitally mature NHS, there is an opportunity to establish the UK as the go-to place for biomedical research. With the appropriate governance in place, and through closer collaboration of the life sciences industry and the NHS, AI has the potential to empower large national trials more efficiently, including through the seamless adaptation of subjects to treatments.”

And, in relation to defence: “The information involved [in defence and security applications] is often multi-modal (text, audio and video) and is invariably uncertain, incomplete and contradictory. Sometimes it is also intentionally misleading. AI methods are necessary to make sense of this.” The council also welcomes a new MOD agency for AI.
