What is ‘good’ technology? A conversation with research associates at the Centre for the Future of Intelligence
Varsity sits down with research associates and podcasters Eleanor Drage and Kerry McInerney to talk AI and the future of intelligence
When I sit down to speak with Eleanor Drage and Kerry McInerney, it is over Zoom. You’d think that this far post-pandemic I would have overcome the awkwardness of talking to my computer screen alone in my room, but no. Yet, as research associates at the Centre for the Future of Intelligence, Eleanor and Kerry are anything but intimidated and, as I’m about to find out, there is a lot more at stake in the discussion of ethics and technology than just a breach of personal boundaries.
Kerry begins by introducing their research into feminist approaches to the deployment of AI and related policies. She tells me that the most “fun” part of her work is running their podcast about gender and technology, The Good Robot, which tries to “reconcile the fact that people are really different and have really different views” about what makes “good technology”. I’m intrigued by this intersection between technology and feminism, and when I push further into their definitions, the expansive focus of their work becomes all the more apparent. “For me personally,” Kerry says, “I see feminism as a movement that is trying to bring about a world where we live fully and expansively and without violence and without fear.” Nevertheless, Kerry is keen to point out that their work does not subscribe to a single definition and instead incorporates a diverse “plurality of feminisms”. Eleanor explains that this is a way of “anchoring pro-justice movements” to a movement that is more mainstream. Ultimately, their work tries to find ways of “making life liveable for everyone”.
So what is good technology?
With that clarified, I am keen to turn the tables and find out their own, simpler answer to the enormous question they pose to guests on their podcast. Eleanor laughs and says that in academia people tend to go for the “big conceptual answers”, but luckily she “prefer[s] the smaller answers”. She tells me that “it’s important not to detract too much from the things that are really essential to people’s lives”. She gives the example of a blood sugar monitor for someone with diabetes: it needs to function correctly, sync with their body, and so on. Again, it’s about “making life not only liveable but possible”.
"If you don’t have something sensationalist to say about AI, it’s just not going to make it into the paper"
Can AI erase bias in the admissions process?
I’m interviewing them as a university student – and the conversation quickly turns to the issue of using AI to eradicate human bias, especially in admissions settings. Kerry explains that we need to be more sceptical of developers’ claims that these tools can promote diversity and “de-bias the hiring process” by removing gender and race from the equation. She thinks this is based on a “fundamental misunderstanding of what gender and race are”, suggesting that it ignores how “discrimination can still be manifesting along gendered and racialised lines even if those characteristics aren’t supposedly present”. Eleanor then tells me how these hiring tools have repeatedly been shown not to work. She describes one that claimed to assess your personality by analysing an image of your face: simply adjusting the brightness of the photo completely reconfigured people’s personality scores. Undoing the fundamental biases and inaccuracies of these tools will take “more interdisciplinary collaboration”, Eleanor suggests.
What does the media get wrong?
Having already learnt so much, I am keen to find out what Eleanor and Kerry think about how the media represents the issues they’re working on. “If you don’t have something sensationalist to say about AI, it’s just not going to make it into the paper,” Eleanor says. She tells me about a workshop she’s been running for journalists in an attempt to combat the prevalence of this “existential risk Terminator narrative”, which “detracts from the harmful effects [of AI] on people every day”. She questions the publicity afforded to the “classic bunch of AI pale male blokes” who create the narrative that they are “these geniuses who have created digital minds” when in fact there is no evidence to support such a claim. “It’s just a publicity stunt,” she suggests, turning AI into this “unbelievable thing” to disguise its link to the more mundane effects of structural discrimination.
Kerry is also keen to point out that the media is selective about what it represents. Not only do these narratives ignore the “labour and exploitation” of vulnerable people, she suggests, but they also fail to “engage with the environmental costs of AI”. A twenty-minute conversation with ChatGPT, she tells me, is the equivalent of pouring 500ml of water onto the ground.
What can we do?
There are clearly many issues with AI on a systemic level, but Eleanor is optimistic that the next generation of “proactive” students can make a difference. “We need lots of voices and lots of perspectives and lots of expertise, so don’t let yourself get pushed out,” she says. For those of us whose future may not lie in AI, Kerry reminds us to “consider the trade-offs” of using certain AI tools, and while she does not think it should be “the individual consumer’s responsibility to be holding companies accountable”, we need to recognise the risks posed to vulnerable groups. Their final message for us all is to “get in touch”, “bring people together across subjects” and “be aware”.