Meanwhile, the White House Office of the National Cyber Director characterized its understanding of the executive order on social media with precision: “Today’s EO establishes new standards for AI safety and security, the protection of Americans’ privacy, the advancement of equity and civil rights — it stands up for consumers and workers, promotes innovation & competition, advances American leadership around the world.”
The US Department of Homeland Security put out its own fact sheet explaining the executive order and its responsibilities, highlighting key areas:
- Formation of the AI Safety and Security Advisory Board (AISSB) to “support the responsible development of AI. This committee will bring together preeminent industry experts from AI hardware and software companies, leading research labs, critical infrastructure entities, and the U.S. government.”
- Work to develop AI safety and security guidance for use by critical infrastructure owners and operators.
- Efforts to capitalize on AI’s potential to improve U.S. cyber defense, highlighting how CISA is actively “leveraging AI and machine learning (ML) tools for threat detection, prevention, vulnerability assessments.”
Separately, the Cybersecurity and Infrastructure Security Agency emphasized in its own social media post that it will “assess possible risks related to the use of AI, provide guidance to the critical infrastructure sectors, capitalize on AI’s potential to improve US cyber defenses, and develop recommendations for red-teaming generative AI.”
Assessing the AI threat to intellectual property
The threat to intellectual property is not hypothetical and is front and center within the executive order. To bolster the protection of AI-related intellectual property, DHS, through the National Intellectual Property Rights Coordination Center, “will create a program to help AI developers mitigate AI-related risk, leveraging Homeland Security Investigations, law enforcement, and industry partnerships.”
Industry, meanwhile, in the form of IBM, chimed in with the admonishment that the “best way to address potential AI safety concerns is through open innovation. A robust open-source ecosystem with a diversity of voices — including creators, developers, and academics — will help rapidly advance the science of AI safety and foster competition in the marketplace.”
It’s now been a year since ChatGPT stormed into consumer hands, and the past 12 months have seen nothing short of whirlwind adoption. CISOs must, as recommended previously, ask the hard questions and demand provenance and demonstrable test results from providers who espouse the inclusion of AI/ML in their products. While the global government initiatives are pointed in the right direction, it’s clear that it will ultimately fall on the CISO’s shoulders to determine whether the arrows in their quiver are the right ones.