In contrast to distant fears that technology will destroy humanity, more immediate concerns have come into focus: last year's flood of AI-generated fakes and the specific risks that automation poses to copywriting and customer service jobs. The debate has taken on new urgency amid global efforts to regulate the rapidly evolving technology.
“This past year has seen some ‘tremendous’ conversations,” Chris Padilla, IBM’s vice president of government and regulatory affairs, said in an interview. “Now what is the risk? How do we make AI trustworthy?”
The topic took over the meeting. Panels featuring AI CEOs including Sam Altman are the hottest tickets in town, and tech giants including Salesforce and IBM have plastered the snowy streets with ads for trustworthy AI.
But growing anxiety about the risks of AI is hampering the tech industry's marketing blitz.
The event kicked off on Tuesday with Swiss President Viola Amherd calling for “global governance of AI,” echoing concerns voiced by numerous countries that the technology could supercharge disinformation. At a chic café Microsoft set up across the street, CEO Satya Nadella tried to allay concerns that the AI revolution would leave the world's poorest people behind. Over canapés and cocktails at the Alpine Inn, Google CFO Ruth Porat pledged to work with policymakers to “develop responsible regulation” and touted the company's investment in employee reskilling efforts.
But with efforts to align global strategies on the technology hampered by economic tensions between the United States and China, the world's top AI powers, the calls for coordinated action have exposed the limits of the annual summit.
Meanwhile, countries have competing geopolitical interests when it comes to AI regulation. Western governments are crafting rules that could benefit companies in their own countries, while leaders in India, South America and other parts of the Global South see the technology as the key to unlocking economic prosperity.
The AI debate is a microcosm of the broader paradox looming over Davos, where attendees don snow boots and sample expensive wines, go on sleigh rides and belt out classic rock hits in a piano lounge sponsored by cybersecurity firm Cloudflare. The relevance of the conference, founded more than 50 years ago to promote globalization during the Cold War, is increasingly being questioned amid raging wars in Ukraine and the Middle East, rising populism and climate threats.
U.N. Secretary-General Antonio Guterres raised the dual risks of climate disruption and generative AI in a speech Wednesday, noting that they were “thoroughly discussed” at Davos.
“But we do not yet have an effective global strategy to deal with either,” he said. “Geopolitical divides are preventing us from coming together around global solutions.”
It's clear that tech companies aren't waiting for governments to catch up, and even the established banks, media companies and accounting firms at Davos are weighing how to integrate AI into their businesses.
Davos regulars say the growing investment in AI is evident on the promenade, where companies take over storefronts to host meetings and events. In recent years, buzzwords like Web3, blockchain and cryptocurrency dominated those storefronts; this year, the programming has shifted to AI. Hewlett Packard Enterprise and G42, an Emirati company, sponsored the “AI House,” a chalet-style building converted into a gathering place that hosted talks by Meta chief AI scientist Yann LeCun, IBM CEO Arvind Krishna and MIT professor Max Tegmark, among others.
The promenade effectively serves as a “focus group for the next wave of emerging technologies,” said Dante Disparte, a veteran WEF participant and Circle’s chief strategy officer and head of global policy.
Executives said AI will become an increasingly influential force in 2024 as companies build more advanced AI models and developers use these systems to power new products. During a panel hosted by Axios, Altman said the overall intelligence of OpenAI models is “growing across the board.” He predicted that in the long term, this technology will “significantly accelerate the pace of scientific discovery.”
But he said he worries that even as the company moves forward, political operatives or bad actors could abuse the technology to influence elections. He said OpenAI doesn't yet know what election threats will arise this year, but will move quickly to make changes and work with external partners. As the conference kicked off on Monday, the company announced a series of election protection measures, including a commitment to help people identify when an image was created by its image generator, DALL-E.
“I’m nervous about this and I think it’s good for us to be nervous about this,” he said.
With fewer than 1,000 employees, OpenAI has a much smaller election operations team than larger social media companies like Meta and TikTok. Altman defended the company's commitment to election security, saying team size isn't the best way to measure its work in this area. But The Washington Post found last year that the company did not enforce its existing policies against political targeting.
Policymakers are concerned that companies are not thinking enough about the social impact of their products. At the same event, Eva Maydell, a member of the European Parliament, said she was developing recommendations for AI companies ahead of the global elections.
“The theme of this year’s annual meeting is restoring trust,” said Maydell, who helped shape the bloc's AI bill, which is expected to become law this year following a political agreement in December. “I hope this won’t be the year we lose trust in the democratic process because of misinformation and the inability to account for the truth.”