Senior Data Engineer - Data Platform

Slack is looking for an experienced Data Engineer to join our team. You will build scalable backend services and tools that help partners implement, deploy, and analyze data assets with a high level of autonomy and limited friction. You will play a meaningful role in making our partners' interactions with the Data Warehouse pleasant and productive, whether they sit in Analytics, Business Intelligence, Application Engineering, Machine Learning, or IT.

As Slack’s data grows (along with the number of customers, features, and employees), the goal of the Data Platform team is to strengthen the efficiency and dependability of the way we make decisions. You will design and build abstractions that hide the complexity of the underlying Big Data stack (Hadoop, Hive, Spark, Presto, Kafka, Parquet, Airflow, EMR, S3, etc.) and allow partners to focus on their strengths: data modeling, data analysis, search, or machine learning.

You will have deep technical skills and be comfortable both contributing to a nascent data ecosystem and building a strong data foundation for the company. You will be a self-starter, detail- and quality-oriented, and passionate about having a huge impact at Slack.

Responsibilities:

  • Optimize the end-to-end workflow of data users at Slack (from crafting libraries to providing abstractions used to define jobs, schedule data pipelines or access data assets).
  • Provide visibility into our data flows (comprehensive view of sources, transformations, sinks, data lineage).
  • Automate and manage the lifecycle of data sets (schema evolution, metadata store, change and backfill management, deprecation and migration).
  • Streamline the creation of new data sets with accessible frameworks and Domain-Specific Languages (DSLs).
  • Improve the data quality and reliability of the pipelines (monitoring and failure detection).
  • Supply reusable backend abstractions to ingest or access data sets (batch or low-latency APIs).

Requirements:

  • Bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship, or work experience.
  • 5+ years of experience working with Big Data technologies (e.g., Hadoop, Hive, Spark, Presto, Kafka).
  • Deep understanding of polyglot data persistence (relational, key/value, document, column, graph).
  • Skilled at designing and building robust backend data services (distributed systems, concurrency models, microservices).
  • Strong dedication to code quality, automation and operational excellence: unit/integration tests, scripts, workflows.
  • Expertise in object-oriented and/or functional programming languages (e.g., Java, Scala, Python).
  • Excellent written and verbal communication and interpersonal skills; able to collaborate effectively with partners.

Slack is the collaboration hub of choice for companies of all sizes, all across the world. By using Slack, these companies ensure that the right people are always in the loop, that key information is always at their fingertips, and that new team members can get up to speed easily. With Slack, teams are better connected.

Launched in February 2014, Slack is the fastest growing business application ever and is used by thousands of teams and millions of users every day. We currently have nine offices worldwide, in San Francisco, Vancouver, Dublin, Melbourne, New York, London, Tokyo, Toronto and Denver.

Ensuring a diverse and inclusive workplace where we learn from each other is core to Slack's values. We welcome people of different backgrounds, experiences, abilities and perspectives. We are an equal opportunity employer and a pleasant and supportive place to work. Pursuant to the San Francisco Fair Chance Ordinance, we will consider for employment qualified applicants with arrest and conviction records.

Come do the best work of your life here at Slack.