The Center for Long-Term Cybersecurity (CLTC) at the University of California, Berkeley is a research and collaboration hub with a mission to help individuals and organisations address tomorrow’s information security challenges and to amplify the upside of the digital revolution.
Through its AI Security Initiative, CLTC undertakes interdisciplinary research on the global security impacts of artificial intelligence (AI). For instance, it explores how vulnerabilities in AI systems can yield dangerous outcomes for the global security landscape, from automated cyberattacks to disinformation campaigns and new forms of warfare. The AI Security Initiative works across technical, institutional, and policy domains to support the trustworthy development of AI systems today and into the future. CLTC has also partnered with cyber and technology diplomats from 25 countries.
In its latest report, Toward AI Security: Global Aspirations for a More Resilient Future, CLTC investigates the robustness and resilience of AI systems from a cybersecurity standpoint. The report introduces a framework for navigating AI security, analyses AI strategies and policies from ten countries, and highlights significant policy gaps as well as opportunities for co-ordination and co-operation among the surveyed nations.
In collaboration with the World Economic Forum’s Global Future Council on Cybersecurity, CLTC and CNA’s Institute for Public Research are part of the global initiative Cybersecurity Futures 2025. This initiative aims to shape a forward-looking AI research and policy agenda applicable across countries and regions. The goal is to help decision-makers in government, industry, and civil society reduce friction, seize opportunities for co-operation, and better prepare for the future.
- Works to support the trustworthy development of AI systems.
- Engages with AI practitioners and decision-makers to promote AI security.