Lead Data & Analytics Engineer

Engineering · Full-time · State of Mexico, Mexico · Remote possible

Job description

About Brilliant

Brilliant is making a world of great problem solvers. We focus on adults learning quantitative skills – especially in math, data, and CS/AI – and deliver a best-in-class interactive learning experience across web and apps. Our courses teach you what you need to know, while skipping the stuff you don’t – so expect more about solving equations, statistical analysis, logical deduction, neural networks, and generative AI, and less about abstract theorems and integrating complicated trig functions. 

We serve hundreds of thousands of paid subscribers, and we’re hoping you might be the right person to help accelerate our growth to millions of customers (and changed lives). In addition to what’s below, you can see all open roles and learn more about our team culture on our careers page.

We have always prioritized building a healthy business as the backbone of achieving our mission. We are default alive (we will be profitable before needing to raise again), have never had layoffs, and are growing new customers at high double-digit rates year-over-year. Our investors are top-tier and mission-aligned, and we’ve kept our valuations tethered to reality – we aren’t playing “catch up” like many others.

In our day-to-day, we value adventure, excellence, generosity, and candor. We are optimists in the face of uncertainty, we take pride in our work, we go the extra mile for each other, and we tell it like it is (the good and the bad). We’re all here to do the best work of our lives together, and have a lot of fun along the way. 

We believe that real-time collaboration and human connection are necessary ingredients in building a high-velocity, creatively oriented consumer product. We maintain core hours (10am - 3pm Pacific) where everyone is online, regardless of timezone. Over half of us are located near our hubs in SF and NYC, and folks outside of those cities travel to attend team offsites once per quarter.

The Role

In this high-autonomy position, you'll direct the development and maintenance of our data infrastructure and data developer experiences for the benefit of the Data and Engineering teams. You’ll collaborate closely with a team of 4 data scientists and 5 engineering managers across an engineering team of 40. Your work will be among the most highly leveraged in the company.

You’ll build and extend modern data infrastructure built around dbt (including dbt Cloud) and Snowflake, with supporting tools like Fivetran, Census, and Amplitude.

Responsibilities

  • Design efficient and scalable data pipeline architecture for collecting data across a variety of sources, enabling different functions to leverage transformed data for analytics and operations.
  • Improve existing data modeling and deployment practices, fostering best practices to make the team more efficient and improve data quality.
  • Collaborate with engineers, product managers, and data scientists to understand data needs, overseeing end-to-end event instrumentation for new features, including naming conventions and properties.
  • Drive data “operationalization” – ensuring that we’re sending the right data to the right tools and services, on time and within budget (such as by managing tools like Census).
  • Ensure consistent pipeline performance with respect to latency and error handling.
  • Optimize the entire data stack — from data storage to transformation to analytical tooling — from a performance, cost, and scalability standpoint.
  • Lead us toward convenient data governance by selecting an ideal CDP (customer data platform) and supporting tools.

Who are you?

  • Experienced: You bring at least 5 years of software engineering experience, including at least 2 years of working directly with some part of the “modern” data stack (dbt core & cloud, Fivetran, Snowflake, or equivalents).
  • Empathy for both worlds: You’ve worked closely enough with software engineering teams to understand their concerns and have also walked in the shoes of a data scientist.
  • Technically proficient: You possess advanced SQL skills and solid Python skills, and you’ve directly built or managed live systems that relied on third-party tools.
  • A builder: You’re enthusiastic about establishing the foundations of a data team and its tools from scratch.

What might you tackle in the first 90 days?

  • Audit our data infrastructure from top to bottom to proactively identify performance, scale, and complexity considerations.
  • Audit our data stack (e.g. Snowflake, Fivetran, dbt, Census, Amplitude, Avo) to ensure conformity to best practices.
  • Audit the existing ELT process for business-critical data models and recommend ways to improve data quality, integrity, and reliability.
  • Review and extend data observability, monitoring, and alerting — with a deep empathy for how data issues could adversely affect the end user experience.
  • Determine priority (and vendor/OSS selection) for data governance tooling.
  • Determine priority and general implementation approach for supporting managed business metrics throughout dbt and related tools.
