With a mission to provide trustworthy AI systems, Preamble is committed to shaping the future of AI safety. Our journey began in 2020 after attending the first Joint Artificial Intelligence Center (JAIC) AI Symposium for the DoD. We got to work after realizing that the DoD was seeking AI capabilities, yet no one was offering solutions for AI risks.
In 2022, we discovered a persistent vulnerability in OpenAI's GPT-3, now known as 'prompt injection'. After privately disclosing it to OpenAI, we recognized an urgent need for comprehensive safeguards against such exploits, which could compromise the integrity of AI operations.
Motivated by this finding, we began developing a comprehensive solution to ensure the safe and secure deployment of generative AI technologies.