AI's Benefits Can Only Be Realised If We Acknowledge the Risks: Professor Arvind Narayanan
Professor Arvind Narayanan, speaking at the Hindustan Times Leadership Summit, emphasises the importance of understanding AI's risks to fully benefit from its capabilities, highlighting concerns like deepfakes, predictive AI, and the need for regulation.
The rapid rise of artificial intelligence (AI), with millions relying on generative tools like chatbots and image generators, comes at a significant cost. Princeton University professor Arvind Narayanan, in his book AI Snake Oil, warns that this surge in AI usage presents a "major societal problem," as many consumers remain unaware of the technology’s potential dangers.
Speaking at a virtual session of the 22nd Hindustan Times Leadership Summit, Narayanan raised critical concerns about AI's growing influence. These included mitigating risks, guarding against flawed AI-driven decisions in areas such as lending, employment, and criminal justice, and the urgent need for institutional oversight and regulation, especially in the case of deepfakes.
“AI being available to the public isn’t inherently bad. For the first time, powerful AI tools are accessible to individuals, not just corporations and governments. That’s largely a positive shift. However, we can only reap the benefits if we are acutely aware of the risks,” said Narayanan.
Narayanan's research focuses on tempering the often overenthusiastic promotion of new technologies that lack adequate safeguards. Regarding AI, he noted that the risks may not be immediately apparent to all users.
“People need more information about these systems’ limitations, and companies must be transparent about them. In cases like deepfakes, regulation is essential. It’s not enough to rely on responsible use—there will always be bad actors,” he explained.
He pointed to the misuse of AI in creating non-consensual nude images, a growing issue largely affecting women. “AI is being used to create non-consensual images, impacting thousands of women worldwide. There are many potential misuses, and while individual responsibility plays a part, companies need to improve their products,” Narayanan said.
Another significant concern was predictive AI. Unlike the fictional AI in Minority Report, real-world predictive AI applications, such as those used in job recruitment, loan approval, and criminal justice, can be problematic. “Predictive AI is making consequential decisions, like determining who will commit crimes or repay loans. These systems are flawed and often unjust,” he warned.
Narayanan stressed the importance of caution when it comes to such technologies, advocating for more regulation and careful consideration by companies using these systems.
On addressing issues like bias, hallucinations, and factual inaccuracies in AI, Narayanan acknowledged progress in reducing bias through more diverse training data. However, the problem of “hallucinations”—AI generating incorrect or misleading information—remains a significant challenge. “Chatbots that summarise web information, rather than relying on memory, help reduce hallucinations, but they don’t eliminate them,” he noted.
Narayanan pointed out that the problem of generative AI producing incorrect information, whether due to uncontextualised prompts, outdated data, or hallucinations, is “getting out of hand.” He emphasised that addressing this issue requires collective action, as the timeline for resolving these problems remains uncertain. “I don’t know when, or if, it will be solved, so we must exercise caution,” he added.
Currently, while countries debate AI regulation, only self-regulation exists. One example is Adobe's Content Credentials initiative, which attaches provenance metadata to generated content so its origin and edit history can be traced, and which has garnered support from Microsoft, Qualcomm, Leica, Nikon, and Shutterstock.
“Regulating AI is possible, but we must recognise that AI isn’t a monolithic entity,” Narayanan explained. AI powers not only generative tools but also social media feeds, self-driving cars, and other technologies, meaning regulation must be tailored to address the unique challenges posed by each application.
Narayanan cited examples of existing regulations, such as the strict rules governing self-driving cars in many regions and the use of AI in banking, a sector that is already heavily regulated. "The best approach isn't to focus solely on AI but on the harms we want to mitigate," he concluded.
Whether regulators will adopt this approach remains to be seen.