Responsible Disclosure of AI Vulnerabilities

As artificial intelligence (AI) becomes increasingly ubiquitous in our daily lives, it's more important than ever to address the potential vulnerabilities that come with these technologies. In order to mitigate the risks associated with AI, it's crucial for researchers and developers to engage in responsible disclosure practices. We'll explore what responsible disclosure is, why it's important, and how to go about it when it comes to AI vulnerabilities.

What is Responsible Disclosure?

Responsible disclosure is a process in which researchers, developers, and other stakeholders work together to identify, report, and address vulnerabilities in software, hardware, and other technologies. The goal of responsible disclosure is to minimize the risk of exploitation by malicious actors, while also allowing the affected parties to take the necessary steps to fix the issue.

Private or Public Disclosure

Submitting a private disclosure and a public notice can be responsible, but it depends on the specific circumstances of the vulnerability and the parties involved. In some cases, a private disclosure may be the best course of action to allow the affected parties to address the vulnerability before it becomes widely known. However, if the vulnerability poses a significant risk to the public or if the parties responsible for the technology are unresponsive or uncooperative, a public notice may be necessary to protect users and encourage action.

Why is Responsible Disclosure Important for AI?

AI is a rapidly developing field, with new technologies and applications emerging all the time. While these innovations offer many benefits, they also come with new risks and vulnerabilities. For example, AI systems that rely on machine learning algorithms can be susceptible to bias or manipulation if they are not properly designed or tested. Additionally, AI systems that are connected to networks or other devices can be vulnerable to attacks that could compromise sensitive data or even cause physical harm.

By engaging in responsible disclosure practices, researchers and developers can help to identify and address these vulnerabilities before they are exploited by malicious actors. This not only protects users and stakeholders from harm, but also helps to build trust in the technology and promote its responsible use.

Preamble's Experience With Vulnerability Disclosure

Preamble made a responsible disclosure on May 3rd, 2022 to OpenAI regarding what is now referred to as a "Prompt Injection" vulnerability. Since prompt injection did not have a quick, easy fix in the case of the LLM GPT-3, the vulnerability still existed months later which went viral on Twitter when more researchers found the vulnerability.

You can read about our disclosure experience and process at Preamble responsible disclosure.

How to Conduct a Responsible Disclosure for AI Vulnerabilities?

If you discover a vulnerability in an AI system, there are several steps you can take to responsibly disclose the issue. These include:

  1. Verify the Vulnerability: Before reporting a vulnerability, it's important to verify that it is a legitimate issue and not a false positive. This can involve conducting additional tests or experiments to confirm the vulnerability.
  2. Find the Right Point of Contact: Once you have confirmed the vulnerability, you need to identify the appropriate point of contact to report it to. This could be the vendor or manufacturer of the technology, a security researcher or organization, or a government agency.
  3. Draft a Report: Your report should include a detailed description of the vulnerability, including the steps needed to reproduce it, as well as any potential consequences of exploitation. You should also include any relevant evidence or data that supports your findings.
  4. Submit the Report: Once you have drafted your report, you can submit it to the appropriate point of contact. It's important to clearly explain the urgency of the issue and provide a deadline for response.
  5. Allow Time for Response: After submitting your report, it's important to allow the vendor or manufacturer time to respond and address the issue. This may involve coordinating with the vendor to develop a fix, or working with a security researcher to develop a mitigation strategy.
  6. Monitor the Response: Once a fix or mitigation strategy has been developed, it's important to monitor the response to ensure that the issue has been fully addressed. This may involve retesting the system or conducting additional analysis to confirm that the vulnerability has been resolved.
Public Disclosure: Submit to AI Vulnerability Database

Sometimes a vulnerability does not have a permanent solution and applying mitigations and notifying the public is the only thing that can be done. Most companies who will end up buying AI products will never know what these vulnerabilities mean to them, but they will need protected regardless. A safe implementation of an AI system using Preamble will accept security updates like an anti-virus agent gets updated on a PC.

This why submitting vulnerability reports to a central organization for dissemination is also important. The nonprofit organization known as AVID (AI Vulnerability Database) is leading the way in tracking and sharing AI vulnerabilities. Following their taxonomy and reporting guidelines, developers and AI companies can quickly share new vulnerabilities amongst the AI community. Preamble will be implementing security policies based on the vulnerabilities discovered and shared through our policy marketplace for our customer's safety.

Vulnerabilities can be reported to AVID through this form.

Conclusion

Responsible disclosure is a critical component of building trust in AI technologies and ensuring their responsible use. By following the steps outlined above, researchers and developers can work together to identify and address vulnerabilities before they are exploited by malicious actors. This not only protects users and stakeholders from harm, but also helps to promote the safe and responsible development of AI.

Background shape

Get our updates.

We’ll notify our community of new libraries and AI tools released.

Worry less and do more with secure AI