AI could manipulate decision-making processes, Cambridge researcher says
Researchers have warned that this could impact ‘free and fair elections, a free press and fair market competition’
A paper published by the Cambridge Leverhulme Centre for the Future of Intelligence (LCFI) has predicted a rise in AI tools capable of forecasting users’ online decisions and collecting data on those predicted intentions.
Dr Jonnie Penn, a historian at the LCFI, described how “for decades attention has been the currency of the internet […] Sharing your attention with social media platforms such as Facebook and Instagram drove the online economy.”
Penn outlined what a future in which this technology is fully developed and operational may look like: “Unless regulated, the intention economy will treat your motivations as the new currency. It will be a gold rush for those who target, steer and sell human intentions.”
“We should start to consider the likely impact such a marketplace would have on human aspirations, including free and fair elections, a free press and fair market competition, before we become victims of its unintended consequences,” he added.
The paper outlines the repercussions of such a development, in which AI outputs are adjusted according to “streams of incoming user-generated data”. Models could even “steer” conversations to generate specific data.
The report quoted Jensen Huang, CEO of the chipmaker Nvidia, who said that models will “figure out what is your intention, what is your desire, what are you trying to do, given the context, and present the information to you in the best possible way.”
Mark Zuckerberg’s Meta was used as an example, with its AI model Cicero acquiring new levels of intelligence and bringing this reality a step closer. Cicero has learned to play Diplomacy – a game that relies on accurately predicting other players’ moves – an achievement the paper highlights as demonstrating “human-level” intelligence.
Citing Cicero, the study predicts a future in which Meta could auction data seeded by predictions of user decision-making. This data would come in a “highly quantified, dynamic and personalised format”, unlike current forecasting of human behaviour.
Current large language models (LLMs), such as ChatGPT and other AI chatbots, will act as a framework for predictive AI, which the paper claims will “anticipate and steer” users based on “intentional, behavioural and psychological data”.
In the future, LLMs will be able to communicate with users via bidding on ad exchanges, the paper argues. This communication could include asking a user their thoughts on seeing a particular film, or suggesting future recreational activities. While the answers create the data on user intentions, the AI tools’ questions are matched to a user’s “personal behavioural traces” and “psychological profile”.
The study claims that “in an intention economy, an LLM could, at low cost, leverage a user’s cadence, politics, vocabulary, age, gender, preferences for sycophancy, and so on, in concert with brokered bids, to maximise the likelihood of achieving a given aim (e.g. to sell a film ticket).”