Software Engineer, AI Security

Engineering · Full-time · San Francisco, United States

Job description

Robust Intelligence's mission is to eliminate AI risk. As the world increasingly adopts AI into automated decision processes, it inherits great risk.

Our flagship product is built to be integrated with existing AI systems to enumerate and eliminate risks caused by unintentional and intentional (adversarial) failure modes. With Generative AI becoming increasingly popular, new vulnerabilities and attacks present a significant threat to AI companies and their consumers. Our Generative AI Firewall provides a safety net against these failure modes.

At Robust Intelligence, we have built a multidisciplinary team of ML Engineers, AI security experts, and software engineers to advance the state of AI security. Together, we're building the future of secure, trustworthy AI.

About The Role

We are seeking a passionate and dynamic Software Engineer, AI Security with experience in Machine Learning (ML) to join our cutting-edge AI security team. This role involves tracking emerging threats to AI systems, developing detections and mitigations for those threats, and contributing to the security of next-generation AI technologies. The ideal candidate will sit at the intersection of AI and cybersecurity, eager to explore the landscape of AI vulnerabilities and to help secure AI against evolving threats.

As a Software Engineer, AI Security, you will:

  • Become an expert researcher in emerging threats to AI systems, focusing on AI/ML models, applications, and environments.
  • Collaborate with senior team members to develop and implement detection and mitigation strategies for identified threats.
  • Help train state-of-the-art robust ML models for critical security tasks.
  • Participate in red-teaming exercises and vulnerability assessments of generative AI technologies to identify potential security and safety weaknesses.
  • Support the team in scripting and automation tasks using Python to enhance our security frameworks, toolsets, and demos.
  • Work with AI Security and ML engineers and researchers to translate research ideas into product features. You'll also have the opportunity to contribute to our overall machine learning culture as an early member of the team.

What we look for:

  • Degree in Computer Science, Artificial Intelligence, or Applied Math.
  • 1-3 years of work experience in the tech industry.
  • Background in AI, machine learning, and deep learning.
  • Understanding of generative AI models/applications and their potential security implications.
  • Strong programming skills in general-purpose programming languages such as Python, JavaScript, and/or Golang.
  • Ability to quickly learn new technologies and concepts and to understand a wide variety of technical challenges to be solved.
  • Strong analytical and problem-solving skills.
  • Strong written and verbal communication skills, including process documentation, report writing, and presentations.

Preferred qualifications:

  • Participation in cybersecurity or AI workshops, Capture the Flag competitions (AI Village, etc.), or hackathons.
  • Publications in machine learning and/or cybersecurity (papers, blogs, open source projects, etc.).
  • Experience in writing software and conducting experiments that involve large AI models and datasets.
  • Contributions to open source software projects.
  • Exposure to offensive security concepts, threat actor behaviors, and threat modeling.
  • Interest in dataset curation, generation, and the ethical implications of AI.
