In cybersecurity's so-called "defender's dilemma," the good guys must always be running, always on guard, while attackers need only find one small opening to break through and do real damage.
But Google says defenders must adopt advanced AI tools to help break this arduous cycle.
To support this, the tech giant today announced a new “AI Cyber Defense Initiative” and made several AI-related commitments ahead of the Munich Security Conference (MSC), which starts tomorrow (February 16).
The announcement comes a day after Microsoft and OpenAI released research into adversarial uses of ChatGPT and pledged to support “safe and responsible” use of AI.
As government leaders from around the world come together at MSC to discuss international security policy, it is clear that the AI power players are looking to demonstrate their proactiveness when it comes to cybersecurity.
“The AI revolution is already underway,” Google said in a blog post today. “We are excited about the potential of AI to solve generational security challenges while bringing us closer to the safe and trusted digital world we deserve.”
In Munich, more than 450 senior decision-makers, thought leaders and business leaders will gather to discuss topics including technology, transatlantic security and the global order.
"Technology is increasingly permeating every aspect of how nations, societies and individuals pursue their interests," MSC says on its website, adding that the conference aims "to promote inclusive security and global cooperation" by advancing the debate on technology regulation, governance and use.
AI is top of mind for many global leaders and regulators, who are scrambling to not only understand the technology but also stay ahead of its use by malicious actors.
Ahead of the event, Google pledged to invest in "AI-enabled infrastructure," launch new tools for defenders, and roll out new research and AI security training.
Today the company announced a new cohort of 17 startups from the US, UK and European Union that will join the Google for Startups Growth Academy's AI for Cybersecurity program.
"This will help strengthen the transatlantic cybersecurity ecosystem with internationalization strategies, AI tools and the skills to use them," the company says.
Google also announced that it is:
- Expanding the $15 million Google.org Cybersecurity Seminars program to cover Europe and help train cybersecurity professionals in underserved communities.
- Open-sourcing Magika, a new AI-powered tool to aid defenders with file type identification, which is essential to detecting malware. Google says the tool outperforms conventional file-identification methods, boosting overall accuracy by 30% and delivering up to 95% higher precision on hard-to-identify content such as VBA, JavaScript and PowerShell. (A brief usage sketch follows this list.)
- Providing $2 million in research grants to support AI-based research initiatives at the University of Chicago, Carnegie Mellon University and Stanford University, among others. The goals are to enhance code verification, improve understanding of AI's role in cyberattacks and defenses, and develop large language models (LLMs) that are more resistant to threats.
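To make the Magika item concrete, here is a minimal sketch of identifying a file's type from its raw bytes using the open-source `magika` Python package. The method and attribute names follow the project's published examples and should be treated as assumptions that may vary by version.

```python
# Minimal sketch of content-based file type identification with Magika,
# assuming the open-source `magika` package (pip install magika).
# Method and attribute names follow the project's published examples.
from magika import Magika

m = Magika()

# Identify a payload's type from raw bytes -- useful for scanning
# attachments or uploads before they ever touch disk.
result = m.identify_bytes(b"function foo() { return 1; }")
print(result.output.ct_label)  # e.g. "javascript"
print(result.output.score)     # model confidence, e.g. 0.99
```

Because Magika works from content rather than file extensions, a defender could, for instance, route a mislabeled .txt attachment that is actually a script straight to malware analysis.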
Google is also helping organizations around the world collaborate on AI security best practices through its Secure AI Framework, launched last June.
"We believe that AI, like any other technology, should be secure by design and by default," the company wrote.
Ultimately, Google emphasizes that the world needs targeted investments, industry-government partnerships, and “effective regulatory approaches” to help maximize the value of AI while limiting its use by attackers.
“AI governance choices made today could change the landscape of cyberspace in unintended ways,” the company wrote. “Our society needs a balanced regulatory approach to the use and adoption of AI to avoid a future where attackers can innovate but defenders cannot.”
Microsoft, OpenAI fight against malicious use of AI
Meanwhile, in a joint announcement this week, Microsoft and OpenAI noted that attackers are increasingly seeing AI as “just another productivity tool.”
Notably, OpenAI said it terminated accounts associated with five state-affiliated threat actors linked to China, Iran, North Korea and Russia. These groups used ChatGPT to:
- Debug code and generate scripts
- Create content that could be used in phishing campaigns
- Translate technical documents
- Search publicly available information on vulnerabilities and multiple intelligence agencies
- Research common ways malware can evade detection
- Conduct open-source research on satellite communication protocols and radar imaging technologies
However, the company was quick to point out that "our findings show our models offer only limited, incremental capabilities for malicious cybersecurity tasks."
The two companies pledged to ensure “safe and responsible use” of their technologies, including ChatGPT.
For Microsoft, these principles are:
- Identify and take action against malicious threat actor use, including account deactivation and service termination.
- Notify other AI service providers and share relevant data.
- Collaborate with other stakeholders on the use of AI by threat actors.
- Inform the public about the use of AI detected in our systems and the actions taken in response.
Likewise, OpenAI promises to:
- Monitor and disrupt malicious nation-state actors. This includes understanding how malicious actors interact with the platform and assessing their broader intent.
- Work together with the broader "AI ecosystem"
- Provide transparency to the public about the nature and extent of AI use by malicious nation-state actors and the actions taken against it.
In a detailed report released today, Google's threat intelligence team said it tracked thousands of malicious actors and malware families and found:
- Attackers continue to specialize their operations and programs.
- Offensive cyber capabilities are now a top geopolitical priority.
- The tactics of threat actor groups now regularly evade standard controls.
- Unprecedented developments such as Russia's invasion of Ukraine mark the first time that cyber operations have played a significant role in warfare.
Researchers also "assess with high confidence" that the "Big Four" of China, Russia, North Korea and Iran will continue to pose significant risks across regions and sectors. For example, China has invested heavily in offensive and defensive AI as it competes with the US, and has engaged in personal data and IP theft.
Google notes that attackers are using AI for social engineering and information operations in particular, developing more sophisticated phishing and SMS lures, fake news and deepfakes.
“We believe that as AI technology advances, it has the potential to significantly increase malicious operations,” the researchers wrote. “Government and industry must scale up to address these threats through robust threat intelligence programs and strong collaboration.”
Overturning the ‘defender’s dilemma’
AI, on the other hand, supports defenders' work in the areas of vulnerability detection and remediation, incident response, and malware analysis, Google notes.
For example, AI can quickly summarize threat intelligence and reports, condense case investigations and explain the behavior of suspicious scripts. Likewise, it can classify malware, prioritize threats, identify security vulnerabilities in code, run attack vector simulations, monitor control performance and assess the risk of early failure.
Google also says AI can help non-technical users generate queries in natural language; develop security orchestration, automation and response (SOAR) playbooks; and create identity and access management (IAM) rules and policies.
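As a purely hypothetical illustration of that natural-language-query idea (not a depiction of any actual Google product), a defender-facing tool might ask an LLM to draft a log-search query from an analyst's plain-English question. The sketch below assumes the `google-generativeai` Python package; the table schema and prompt are invented for illustration.

```python
# Hypothetical sketch: translating a plain-English analyst question into
# a log-search query with an LLM. Assumes the `google-generativeai`
# package (pip install google-generativeai); the schema and prompt are
# illustrative, not a real Google security product flow.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder credential
model = genai.GenerativeModel("gemini-pro")

question = "Show me failed admin logins from new IP addresses in the last 24 hours"
prompt = (
    "Translate the analyst question below into a SQL query against a table "
    "auth_logs(timestamp, user, role, source_ip, success, ip_first_seen). "
    "Return only the SQL.\n\nQuestion: " + question
)

response = model.generate_content(prompt)
print(response.text)  # candidate query for the analyst to review before running
```

The key design choice here is that the model only drafts the query; a human analyst reviews it before it runs against real logs.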
For example, Google's detection and response team uses gen AI to create incident summaries, ultimately saving more than 50% of the time the task would otherwise take while producing higher-quality output in incident analysis.
The company also used RETVec, a new multilingual, neural-based text-processing model, to improve spam detection rates by approximately 40%. Additionally, its Gemini LLM is fixing 15% of the bugs discovered by sanitizer tools and delivering code-coverage increases of up to 30% across more than 120 projects, leading to the detection of new vulnerabilities.
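For a rough sense of how a RETVec-style classifier fits together, here is a sketch of a spam-detection model built around the open-source `retvec` package for TensorFlow. The RETVecTokenizer layer name and its arguments are assumptions based on the project's examples and may differ by version.

```python
# Rough sketch of a RETVec-backed text classifier (e.g. spam detection),
# assuming the open-source `retvec` package (pip install retvec) and
# TensorFlow. Layer names and arguments follow the project's examples
# and may vary by version.
import tensorflow as tf
from retvec.tf import RETVecTokenizer

inputs = tf.keras.layers.Input(shape=(1,), dtype=tf.string, name="text")
# RETVec maps raw UTF-8 strings to robust embeddings, making the
# classifier resilient to typos and character-level evasion tricks.
x = RETVecTokenizer(sequence_length=128)(inputs)
x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)  # spam probability

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```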
Ultimately, Google's researchers argue: "We believe that AI reverses the defender's dilemma and tilts the scales of cyberspace, giving defenders a decisive advantage over attackers."