Lingokids is a well-funded, fast-growing edtech startup changing the face of early childhood language education. We’ve taught over two million kids in 170 countries since our launch last year.
At Lingokids, we encourage candidates of all backgrounds and identities to apply. We actively seek to hire individuals with different perspectives, and we are eager to continue diversifying our company's culture. We strive to be an inclusive, supportive place where you can do the best work of your career.
Our offices are in Madrid, Spain, but the position is open to anyone with experience working remotely. We do expect some overlap with Central European Time, though, for those times when a video call is the most efficient way to communicate.
About the job:
Our Data Platform is what supports the Analytics needs and data-driven projects of the company. It is built around a modern ELT pipeline deployed on AWS and uses Python, Spark, dbt, Redshift, and a range of AWS services.
We are looking for mid-level or senior Data Engineers to join our Data Engineering Squad, where you will support, improve, and extend our Data Platform. Your work will empower our Analysts and Data Scientists, enable data-driven Product features, and generally support the data-informed culture that exists at Lingokids.
More specifically, based on your experience you will:
- Design, build, and support modern, scalable data pipelines using third-party platforms or internal solutions.
- Collaborate with data scientists, other engineers, and stakeholders to understand what data is required and how best to make it available in our Data Platform.
- Support the daily work of our Data Scientists by ensuring they have easy access to data and tools (development environment, notebook instances, etc.).
- Write and maintain code to orchestrate our ELT workflows.
- Improve the design and data feeding of our Data Lake.
- Help and train Data Scientists to optimize SQL queries for performance.
- Provide data and infrastructure for building and deploying ML models to production.
- Use best practices around CI/CD, automation, testing, and monitoring of analytics pipelines (inspired by DataOps).
The ideal teammate for us would be someone who believes that communication, empathy, inquisitiveness, and open-mindedness are fundamental to being successful in any endeavor.
Ideally, you should have some or all of the following:
- Be interested in building a platform that enables our Data Analysts and Data Scientists.
- Be fluent with one or more high-level programming languages (Python, Ruby, Java, Scala, or similar).
- Be comfortable with analytical SQL.
- Be familiar with software development best practices and their applications to Analytics (version control, testing, CI/CD, automation).
- Have experience building modern ETL pipelines, possibly at a large scale.
- Have experience working with a modern data warehouse (Redshift, Snowflake, BigQuery, or similar).
- Have a self-driven approach to learning new technologies and moving projects forward.
We also consider these nice to have:
- Experience working with Data Scientists and Analysts.
- An interest in data quality and governance.
- Experience in the Big Data ecosystem (Hadoop, Spark, PrestoDB, …).
- Familiarity with infrastructure and automation tools (Terraform, Ansible, or similar).
English is a must. We are a multicultural team providing a service in English, so we don't care about certificates, but we do expect you to be able to communicate fluently.
You should also feel comfortable communicating in long-form writing. Given the circumstances, we have become a fully remote company, and we are firm believers that being articulate in both spoken and written long-form asynchronous communication is key to working efficiently together.
If you think you don't tick all the boxes, we'd still love to hear from you. Nobody checks every box, and we are looking for someone excited to join the team.