#information security
#national security
#privacy

The New Executive Order Addresses AI Security

November 28, 2023


Biden issued an Executive Order to set standards for AI security and safety. Learn how businesses may be affected.

On October 30, 2023, President Biden signed an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order is part of a broader effort to regulate the development and use of AI-based technologies, and it outlines numerous measures intended to enhance AI security and safety, strengthen privacy protections, advance equity, protect consumers and workers, and promote innovation and competition.

Although many components of the Order focus on the Federal Government’s response to AI-related issues, there are also important items for individuals and organizations to consider. In this article, we are going to explore the new standards for AI security outlined in the Executive Order, as well as anticipate the actions that businesses will be required to take to ensure secure AI development.

AI-Powered Cyberattacks

Artificial Intelligence (AI) presents society with both opportunities and challenges, and more and more discussions now revolve around the potential threats associated with AI. As President Biden said when signing the order, “Deep fakes use AI-generated audio and video to smear reputations, spread fake news, and commit fraud.” However, deep fakes are only a small part of the AI-related threats the order aims to address.

Deep fakes

Deep fake is a combination of “deep learning” and “fake media,” referring to the use of AI to craft or manipulate audio and visual media so that it appears authentic. The technology is already used for many purposes. For instance, deep fake videos are often used to spread political misinformation, depicting fabricated events and putting words in people’s mouths that were never spoken. Deep fakes have also enabled fraud: in 2019, criminals used AI-based software to impersonate a chief executive’s voice and trick a UK-based energy firm into transferring €220,000 (about $243,000) to a Hungarian bank account. Finally, deep fakes have even been weaponized: in the Russian-Ukrainian war, both sides have circulated deep fake videos of top officials for misinformation and demoralization.

AI-Powered Password Cracking

Cybercriminals are employing AI to improve algorithms for guessing account passwords. Password-cracking algorithms existed long before the proliferation of AI, but AI allows criminals to scale and automate their attacks. For example, AI-assisted brute-force attacks can cycle through an immense number of password combinations while prioritizing the most likely candidates, dramatically increasing the speed at which passwords are cracked. The same applies to acoustic side-channel attacks: AI helps cybercriminals analyze the distinct sound patterns produced by keyboard keystrokes, which are captured and analyzed to determine the characters being typed.
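To make the ranking idea concrete, here is a minimal Python sketch of rank-ordered guessing, which is essentially what AI adds to brute forcing. This is our illustration, not anything from the order: the ranked list below stands in for the output of a trained password-guessing model, and the target hash is purely illustrative.

```python
import hashlib

# Toy "model output": candidates ordered from most to least likely.
# A real AI-assisted attack would rank millions of guesses this way.
ranked_guesses = ["123456", "password", "qwerty", "letmein", "dragon"]

# SHA-256 hash of the hypothetical target password "letmein".
target_hash = hashlib.sha256(b"letmein").hexdigest()

def crack(target, guesses):
    """Try guesses in rank order; a good ranking means few attempts."""
    for attempt, guess in enumerate(guesses, start=1):
        if hashlib.sha256(guess.encode()).hexdigest() == target:
            print(f"Cracked in {attempt} attempts: {guess!r}")
            return guess
    return None

crack(target_hash, ranked_guesses)
```

Against a well-ranked list, common passwords fall within the first few attempts; naive enumeration of every character combination would take exponentially longer.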

AI-Assisted Hacking

Apart from password cracking, cybercriminals also use AI to automate and enhance various hacking activities: automated vulnerability scanning, intelligent detection and exploitation of system weaknesses, adaptive malware development, and more. For example, hackers develop malware that uses machine learning to dynamically change its code and behavior in order to bypass detection tools and identify vulnerabilities. DeepLocker, presented by IBM Security back in 2018 to demonstrate how existing AI models can be combined with current malware techniques, is an example of AI-powered malware. It uses deep learning to target specific victims and remain dormant until specific conditions are met, making it highly sophisticated and difficult for traditional detection systems to catch.
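IBM’s public write-ups describe DeepLocker’s core trick as deriving the payload’s decryption key from attributes of the intended target, so the sample stays inert, and unreadable to analysts, everywhere else. Below is a harmless sketch of that conditional-unlocking idea: a simple username stands in for DeepLocker’s neural-network-derived features, and the names and “payload” are ours, purely for illustration.

```python
import getpass
import hashlib

def derive_key(attribute: str) -> bytes:
    # Stand-in for a neural-network-derived key: the correct key
    # exists only when the target attribute is present and correct.
    return hashlib.sha256(attribute.encode()).digest()

def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher for illustration only; not real cryptography.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# "Packing": lock a message to a hypothetical target user named "alice".
locked = xor_cipher(b"benign demo payload", derive_key("alice"))

# "Triggering": decoding succeeds only in the matching environment.
attempt = xor_cipher(locked, derive_key(getpass.getuser()))
print(attempt)  # gibberish unless the current user happens to be "alice"
```

Because only ciphertext and a key-derivation routine are stored, an analyst inspecting the sample cannot recover the trigger condition or the payload without the matching target present.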

New Standards for AI Security

To establish new standards for AI safety and security, the Executive Order outlines the following actions to protect Americans from the potential risks of AI systems:

Disclose critical information about AI development to the Government

Developers of the most powerful AI systems will be required to share their safety test results and other critical information with the US Government. This means that in the near future, the most advanced AI developers, such as OpenAI (GPT-3, GPT-4), Anthropic, Google (Bard, Google Assistant), Apple (Siri), and Amazon (Alexa), will be required to conduct rigorous testing to assess the safety and reliability of their technologies. This may involve evaluating how well an AI system performs under various conditions, how it handles unexpected input, and how it adapts to real-world scenarios. The requirement may extend beyond safety tests to encompass details about the AI system’s architecture, algorithms, and potential implications for privacy, security, and societal impact.

Develop AI security standards

Another crucial measure involves adhering to applicable standards and frameworks that guide the creation and implementation of AI systems. These may encompass standards from bodies such as ISO or NIST, as well as the European Commission’s AI Ethics Guidelines or the OECD’s AI Principles. Adherence to existing or newly established standards demonstrates an organization’s dedication and accountability to both stakeholders and customers, emphasizing its commitment to responsible AI development and deployment.

Protect against dangerous biological materials

The intersection of AI and biotechnology poses potential risks, particularly in engineering dangerous biological materials. The order directs the development of strong new standards for biological synthesis screening to protect against these risks.

Protect from AI fraud

The order requires protecting Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content. This may include digital forensics tools that analyze metadata, such as file creation dates, locations, and editing history; anomalies in this information can indicate AI-generated content. Developing and using algorithms that detect patterns indicative of AI-generated content, such as unnatural repetition, uniformity, or artifacts left by specific AI models, is another good practice. Finally, purpose-built deepfake and synthetic-media detection tools can analyze facial expressions, lip sync, and other details that are challenging for the human eye to discern. The final AI security measures of the Executive Order include establishing an advanced cybersecurity program to develop AI tools that find and fix vulnerabilities in critical software, and ordering the development of a National Security Memorandum that directs further actions on AI and security.
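As one concrete illustration of the metadata-forensics approach, image files often carry EXIF tags recording the creating software, timestamps, and camera details; missing or inconsistent tags can be one weak signal (never proof on its own) of synthetic or tampered media. A minimal sketch using the Pillow library, with a hypothetical file name:

```python
from PIL import Image
from PIL.ExifTags import TAGS

def inspect_exif(path: str) -> None:
    """Print the EXIF tags that metadata forensics commonly examines."""
    exif = Image.open(path).getexif()
    if not exif:
        print("No EXIF metadata: a weak signal of generated or stripped media.")
        return
    for tag_id, value in exif.items():
        name = TAGS.get(tag_id, tag_id)
        if name in ("Software", "DateTime", "Make", "Model"):
            print(f"{name}: {value}")

inspect_exif("suspect_photo.jpg")  # hypothetical file name
```

In practice such checks are combined with statistical detectors, since metadata is trivial to forge or strip.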

Protect privacy

The Executive Order requires federal agencies to prioritize federal support for privacy-enhancing technologies, including techniques that allow AI systems to be trained while preserving the privacy of the training data. It also requires developing guidelines for federal agencies to evaluate privacy-enhancing technologies, which may affect private-sector businesses that provide AI and such technologies.
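Privacy-enhancing technologies span a broad family of techniques, including federated learning, secure enclaves, and differential privacy. As one simple illustration of the idea, differential privacy releases aggregate statistics with calibrated noise so that no individual record can be inferred. A minimal sketch using NumPy, with an illustrative dataset and epsilon of our choosing:

```python
import numpy as np

def dp_count(values, epsilon):
    """Differentially private count: the true count plus Laplace noise.
    A count query has sensitivity 1, so the noise scale is 1/epsilon."""
    return float(sum(values) + np.random.laplace(loc=0.0, scale=1.0 / epsilon))

# Hypothetical survey: which users admitted reusing passwords?
responses = [True, False, True, True, False, True]
print(dp_count(responses, epsilon=0.5))  # smaller epsilon = stronger privacy
```

The analyst still learns a useful aggregate, while any single respondent’s answer is hidden in the noise.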

To sum up, the Executive Order on AI is a good start, yet it is not enough to fully address AI-related cybersecurity concerns. It moves the U.S. toward more comprehensive AI governance, and both the Federal Government and private businesses still have much work to do. Discover more exciting topics on our blog, and feel free to contact the Planet 9 team if you have any questions. We’ll be happy to assist!

