AI Security Researcher

Engineering · Full-time · Remote

Job description

Our mission at Robust Intelligence is to enable every organization on the planet to adopt AI securely. As the world increasingly builds AI into automated decision processes, we inherit a great deal of risk.

Our flagship product integrates with existing AI systems to enumerate and eliminate risks caused by both unintentional and intentional (adversarial) failure modes. As Generative AI grows in popularity, new vulnerabilities and attacks pose a significant threat to AI companies and their customers. Our Generative AI Firewall provides a safety net against these failure modes.

At Robust Intelligence, we have built a multidisciplinary team of ML engineers, AI security experts, and software engineers to advance the state of AI security. Together, we're building the future of secure, trustworthy AI.

As an AI Security Researcher, you will:

  • Track and analyze emerging threats to AI systems, focusing on AI/ML models, applications, and environments.
  • Develop and implement detection and mitigation strategies for identified threats, including prototyping new approaches.
  • Lead comprehensive red-teaming exercises and vulnerability assessments of generative AI technologies, and address the weaknesses they uncover.
  • Develop and maintain security tools and frameworks using Python or Golang.
  • Curate and generate robust datasets for training ML models.
  • Author blog posts, white papers, or research papers related to emerging threats in AI security.
  • Collaborate with cross-functional teams of researchers and engineers to translate research ideas into product features. You'll also have the opportunity to contribute to our overall machine learning culture as an early member of the team.

What we look for:

  • 3+ years of relevant professional experience.
  • Experience in applied red- and/or blue-team roles, such as threat intelligence, threat hunting, or red teaming.
  • Strong understanding of common application security vulnerabilities and mitigations.
  • Strong programming skills in general-purpose languages such as Python or Golang.
  • Excellent written and verbal communication skills, along with strong analytical and problem-solving abilities.
  • Ability to quickly learn new technologies and concepts, and to grasp a wide variety of technical challenges.

Preferred qualifications:

  • Experience with AI/ML security risks such as data poisoning, privacy attacks, adversarial inputs, etc.
  • Fluency in reading academic papers on AI/ML and security and translating them into prototype systems.
  • Experience with modern application stacks, infrastructure, and security tools.
  • Experience developing proof-of-concept exploits for new or theoretical attacks.
