Job Description

Shipt is a membership-based marketplace that helps people get the things they need. Our friendly shoppers handpick fresh groceries and household essentials and deliver them to members in as soon as one hour.

Shipt has a wide variety of data and we’re looking for help in consolidating our pipelines and expanding our capabilities. We currently work with data across Postgres, Redshift, flat files, and APIs (e.g., Google Analytics), but we’re actively exploring new data processing technologies and opportunities.

On a daily basis, you’ll focus on developing processes and systems that ingest, clean, and normalize a wide variety of data sources into valuable data sets. You’ll report to the Head of Data Engineering and work primarily with the Data Science and Engineering teams.

Your Responsibilities

  • Maintain and develop processes responsible for ingesting and cleaning large amounts of data from various sources
  • Enable faster consumption and understanding of data, both internally and externally
  • Improve the fidelity of existing data sets used in our product data management system
  • Build upon existing quality control and quality assurance practices
  • Collaborate with Data Science & Analytics to drive data enrichment and the democratization of data across the company

Requirements

  • 3+ years of ETL experience
  • An expert-level understanding of databases
  • Scripting experience (e.g., Python, Ruby, PHP) is required; we currently use Python and Ruby in parts of our data pipelines
  • Keen attention to detail
  • An understanding of large-scale data processing frameworks (e.g., Spark, Flink, or many others) is a major plus
  • A Bachelor’s degree in Computer Science, Information Systems, or a related field

We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.