Hundreds turned up to hear academics' take on Artificial Intelligence

Daniel Gayne

Organiser Clemens Ollinger-Guptara expected ‘Is artificial consciousness possible?’, billed as an evening of expert discussion, to be an intimate “fireside chat” with two specialists in the field. But after a change of venue and 2,100 people registering as ‘interested’ on Facebook, a snaking line of hundreds shuffled into Lady Mitchell Hall.

Murray Shanahan, Professor of Cognitive Robotics at Imperial College, and Tim Crane, head of the Faculty of Philosophy at the University of Cambridge, probably aren’t the kind of household names that would fill a hall of 500.

But Artificial Intelligence (AI) has become something of a trendy topic of late. While specialists rave about new horizons, famous figures like Tesla CEO Elon Musk and physicist Stephen Hawking have warned of the technology’s dangerous potential. It’s this thrilling combination of fear and anticipation which piqued the interest of experts and laymen alike.

These anxieties have seeped into popular culture too. Alex Garland’s 2015 sci-fi thriller Ex Machina told the story of a young programmer selected to participate in a ground-breaking experiment in synthetic intelligence by evaluating the human qualities of a breath-taking humanoid AI. And this is where Murray Shanahan, who was the film’s scientific advisor, began. He offered an answer not to the question of whether a machine could be conscious, but to whether it could pass the ‘Garland Test’ featured in the film.

“The real test is to show you that she’s a robot and then see if you still feel she has consciousness”, Nathan says in the film, and this is how Shanahan thinks the problem should be judged. In the film, Ava passes the test, with morbid results. Shanahan then shared an extract from a scene cut from the final version of Ex Machina, in which we come to understand that Ava experiences the world in a way completely alien to us. He used this to suggest that if an AI were conscious, it might be “conscious exotica”, or a “creature with greater capacity for consciousness than us”.

Tim Crane took the stage with a promise to “pour cold water on everything Murray has just said”. He began by commending AI’s achievements, which proved many of its early sceptics wrong, citing the most recent triumph, AlphaGo’s victory over a human player at the incredibly complex game ‘Go’.

But despite this fundamental progress in sophistication, Crane was sceptical of projections suggesting AI would soon develop consciousness. He noted how the creators of Watson, the computer which won at the game show Jeopardy!, had boasted of its ability to observe, interpret, evaluate, and decide like humans by reading millions of unstructured documents every second. “I can read a lot of unstructured documents, I’m head of a philosophy faculty”, he joked, while pointing out that no human is capable of reading at anything like that rate.

So existing AI can imitate humanity, but it does not possess general intelligence, simply huge computing power. For this reason, both the Turing Test and Shanahan’s ‘Garland Test’ are insufficient for addressing the question of consciousness, since they show only whether we can be fooled.

So is artificial consciousness possible? Yes, concludes Crane, while warning that his is “a boring answer”. In principle, consciousness could be replicated by building a human being, but “not only is AI not doing this, it shouldn’t be doing this”.

The speeches were followed by a Q&A in which a student writing a dissertation on the theory of mind commented that the problem of verifying consciousness which the Turing Test attempts to solve was “more of a metaphysical problem”, noting solipsistically that there is no way for us to confirm any other human being’s consciousness.

Ultimately, there was a great deal of agreement between the two thinkers, if no concrete answers. They concurred that the real challenge for AI lies in achieving the general intelligence that characterises living things from rats to humans, with Shanahan saying: “I don’t think you have to have consciousness to have superintelligence”.

The evening drew to an end after a few more questions. A second-year PBS student told me that, while he had found Shanahan’s contributions “new and interesting”, he felt that the huge audience reduced the quality of discussion: the speakers didn’t have time to explain anything properly, and they couldn’t control the quality of audience questions.

While there were no solid answers by the end of the night, the field will not lack for funding in the years to come, with the University rivalling its students in enthusiasm. Cambridge has just established the Leverhulme Centre for the Future of Intelligence thanks to a £10 million grant from the Leverhulme Trust. Since Wittgenstein and Turing, Cambridge has been the place to study consciousness. And with such huge interest in the topic, it looks like the conversation is only just getting started.