Michael Gove is drawing up proposals to turn Cambridge into “Europe’s Silicon Valley”, with plans produced by the Department for Levelling Up, Housing and Communities in recent months “forming a blueprint” to unleash growth in the life sciences and technology sectors. This comes as Cambridge rides high on the heels of being ranked the second-best higher education institution in the world and enjoys a recent wave of success in AI research. Yet the prospect of AI has drawn a mixed reaction across the Cambridge community: how optimistic or fearful should we be?

Ground-breaking medical applications: Cambridge uses AI to cut waiting times in NHS first

Research from Cambridge University and Addenbrooke’s Hospital has found, in an NHS first, that AI is reducing the time that cancer patients wait for radiotherapy treatment. The AI-driven technology allows specialists to plan radiotherapy treatment 2.5 times faster than if they were working alone. Health and Social Care Secretary Steve Barclay commented on this technological first: “Cutting-edge technology can help us reduce waiting times for cancer patients, free up time for staff so they can focus on patient care, and ultimately save lives – and artificial intelligence is playing an increasingly important role.”


Director of the Cambridge Centre for AI in Medicine Mihaela van der Schaar recently suggested in The Guardian that AI “designed specifically for real-world medicine – with all its organisational, scientific, and economic complexity” is the kind of “reality-centric” approach to AI that we should be pursuing. “AI-powered personalised medicine could allow for more effective treatment of common conditions such as heart disease and cancer, or rarer diseases such as cystic fibrosis. It could allow clinicians to optimise the timing and dosage of medication for individual patients, or screen patients using their individual health profiles, rather than the current blanket criteria of age and sex,” she continued.

The scope of Cambridge-based AI research is far-reaching. The Cambridge ALTA Institute uses AI and machine learning to help second-language learners improve their English more effectively by providing users with “detailed diagnostic feedback” – though its researchers are keen to note that they “do not believe that automated assessment should replace class teachers and examiners anytime soon”. In another AI-related success, Grzegorz Sochacki, from Professor Fumiya Iida’s Bio-Inspired Robotics Laboratory in the Department of Engineering, wanted “to see whether we could train a robot chef to learn in the same incremental way that humans can – by identifying the ingredients and how they go together in the dish” using computer vision.

Concerns over AI safety: CompSci professor calls for better AI regulation

Last month, Neil Lawrence, DeepMind Professor of Machine Learning in the Department of Computer Science, argued in an opinion column in The Times that relying on BigTech to tell us what the future of AI technology should look like “is like turkeys asking the farmer what they should be eating for Christmas dinner”. “In many respects, machines are superior to us,” he says. He is critical of BigTech joining the current wave of interest in AI safety – initiated by a visit from his colleague Geoff Hinton to Rishi Sunak’s senior adviser in No 10 to warn that AI represents a serious threat.


Companies are increasingly hopping onto the AI safety bandwagon and are hoping to “encourage the creation of regulatory barriers to hinder any new market entrants, looking to use AI to secure a slice of the action”, Lawrence suggested.

While his colleague Hinton is worried we will ultimately be “manipulated by the machine”, Lawrence suggests we should really be more fearful of the corporate machine. Like many other Cambridge researchers, he is keen to develop a set of safety protocols to ensure the secure governance of AI in the face of its rapid monopolisation. OpenAI, the company behind ChatGPT, released a “bigger and better model” called GPT-4 earlier this year but, as an article in The MIT Technology Review puts it, this is “the most secretive release the company has ever put out, marking its full transition from nonprofit research lab to for-profit tech firm”. Researchers from Cambridge’s Minderoo Centre for Technology and Democracy last month joined a £31 million consortium to build a UK-wide ecosystem for responsible and trustworthy AI. “We will work to link Britain’s world-leading responsible AI ecosystem and lead a national conversation around AI, to ensure that responsible and trustworthy AI can power benefits for everyone,” says Executive Director Gina Neff.

AI is still very different to how humans behave, says head of Linguistics

The University is beginning to adjust to the wider adoption of AI – a Varsity survey earlier this year revealed that 47.3% of students have used AI chatbots to assist with their supervision work. Just as Bhaskar Vira, Cambridge’s pro-vice-chancellor for education, told Varsity that bans on AI software are not “sensible” in higher education and university assessment, Mihaela van der Schaar has called for a “human-AI empowerment agenda”, in which we should not aim to “construct autonomous agents that can mimic and supplant humans but to develop machine learning that allows humans to improve their cognitive and introspective abilities, enabling them to become better learners and decision-makers.”

However, claims that large language models can somehow mimic or supplant human cognitive abilities – passing the Turing Test, named after King’s College alum Alan Turing – have elicited a great deal of scepticism and, in some cases, a strong backlash from cognitive scientists and linguists. The head of the Section of Theoretical and Applied Linguistics, Ian Roberts, co-authored an influential column in The New York Times with eminent linguist and contrarian Noam Chomsky – who remains an influential figure in the discipline, despite his more recent controversies, including financial dealings with Jeffrey Epstein – arguing of ChatGPT that “given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity.”

How susceptible are you to misinformation?

The rise of “large language models” means that more and more of the text on the internet will be AI-generated rather than written by a human – even a Varsity article can be written by ChatGPT, as I demonstrated earlier this year!

Cambridge psychologists have developed a test, available to try out here, that gives you a “solid indication of how vulnerable a person is to being duped by the kind of fabricated news that is flooding online space”. The two-minute test presents you with 20 headlines and gives you a resilience ranking. Head of the Cambridge Social Decision-Making Lab, Professor Sander van der Linden, said: “Misinformation is one of the biggest challenges facing democracies in the digital age.” The researchers found that the worst performers were under-30s who spent the most time online. You should probably stick, at least for now, to getting your news from a Varsity article written by a human instead!

Overall, the Cambridge technology ecosystem is in many ways a microcosm of society at large, grappling with and innovating on new technology that can be both tremendously useful – as the growing application of AI in healthcare demonstrates – and deeply harmful. Cambridge academics are only beginning to actively investigate, and to collaborate with industry partners and policy-makers on, the cybersecurity and misinformation crises that AI increasingly poses.