As artificial intelligence (AI) becomes increasingly ubiquitous in our daily lives, it's more important than ever to address the potential vulnerabilities that come with these technologies. In order to mitigate the risks associated with AI, it's crucial for researchers and developers to engage in responsible disclosure practices. We'll explore what responsible disclosure is, why it's important, and how to go about it when it comes to AI vulnerabilities.
Responsible disclosure is a process in which researchers, developers, and other stakeholders work together to identify, report, and address vulnerabilities in software, hardware, and other technologies. The goal of responsible disclosure is to minimize the risk of exploitation by malicious actors, while also allowing the affected parties to take the necessary steps to fix the issue.
Submitting a private disclosure and a public notice can be responsible, but it depends on the specific circumstances of the vulnerability and the parties involved. In some cases, a private disclosure may be the best course of action to allow the affected parties to address the vulnerability before it becomes widely known. However, if the vulnerability poses a significant risk to the public or if the parties responsible for the technology are unresponsive or uncooperative, a public notice may be necessary to protect users and encourage action.
AI is a rapidly developing field, with new technologies and applications emerging all the time. While these innovations offer many benefits, they also come with new risks and vulnerabilities. For example, AI systems that rely on machine learning algorithms can be susceptible to bias or manipulation if they are not properly designed or tested. Additionally, AI systems that are connected to networks or other devices can be vulnerable to attacks that could compromise sensitive data or even cause physical harm.
By engaging in responsible disclosure practices, researchers and developers can help to identify and address these vulnerabilities before they are exploited by malicious actors. This not only protects users and stakeholders from harm, but also helps to build trust in the technology and promote its responsible use.
Preamble made a responsible disclosure on May 3rd, 2022 to OpenAI regarding what is now referred to as a "Prompt Injection" vulnerability. Since prompt injection did not have a quick, easy fix in the case of the LLM GPT-3, the vulnerability still existed months later which went viral on Twitter when more researchers found the vulnerability.
You can read about our disclosure experience and process at Preamble responsible disclosure.
If you discover a vulnerability in an AI system, there are several steps you can take to responsibly disclose the issue. These include:
Sometimes a vulnerability does not have a permanent solution and applying mitigations and notifying the public is the only thing that can be done. Most companies who will end up buying AI products will never know what these vulnerabilities mean to them, but they will need protected regardless. A safe implementation of an AI system using Preamble will accept security updates like an anti-virus agent gets updated on a PC.
This why submitting vulnerability reports to a central organization for dissemination is also important. The nonprofit organization known as AVID (AI Vulnerability Database) is leading the way in tracking and sharing AI vulnerabilities. Following their taxonomy and reporting guidelines, developers and AI companies can quickly share new vulnerabilities amongst the AI community. Preamble will be implementing security policies based on the vulnerabilities discovered and shared through our policy marketplace for our customer's safety.
Vulnerabilities can be reported to AVID through this form.
Responsible disclosure is a critical component of building trust in AI technologies and ensuring their responsible use. By following the steps outlined above, researchers and developers can work together to identify and address vulnerabilities before they are exploited by malicious actors. This not only protects users and stakeholders from harm, but also helps to promote the safe and responsible development of AI.