Data Engineer - Data Modeling/ETL
Slack is looking for expert data engineers to join our Data Engineering team. In this role, you will be working cross-functionally with business domain experts, analytics, and engineering teams to design and implement our Data Warehouse model. You will design, implement and scale data pipelines that transform billions of records into actionable data models that enable data insights.
You will lead initiatives to formalize data governance and management practices and to rationalize our information lifecycle and key company metrics. You will provide mentorship and hands-on technical support to build trusted, reliable domain-specific datasets and metrics.
The ideal candidate will have deep technical skills and be comfortable both contributing to a nascent data ecosystem and building a strong data foundation for the company. They will be a self-starter, detail- and quality-oriented, and passionate about having a huge impact at Slack.
What you will be doing
- Translate business requirements into data models that are easy to understand and are used by different disciplines across the company. Design, implement, and build pipelines that deliver data with measurable quality within the SLA
- Partner with business domain experts, data analysts and engineering teams to build foundational data sets that are trusted, well understood, aligned with business strategy and enable self-service
- Be a champion of the overall strategy for data governance, security, privacy, quality and retention that will satisfy business policies and requirements
- Own and document foundational company metrics with a clear definition and data lineage
- Identify, document and promote best practices
What you should have
- Bachelor's degree in Computer Science, Engineering or related field, or equivalent training, fellowship, or work experience
- 5+ years of experience in data architecture, data modeling, master data management, and metadata management
- A consistent track record of close collaboration with business partners and crafting data solutions to meet their needs
- Extensive experience scaling and optimizing schemas and performance-tuning SQL and ETL pipelines in OLTP, OLAP, and Data Warehouse environments
- Deep understanding of relational and NoSQL data stores and of modeling methods and approaches (logging, columnar, star and snowflake schemas, dimensional modeling)
- Proficiency with object-oriented and/or functional programming languages (e.g., Java, Scala, Python, Go) is a big plus
- Hands-on experience with Big Data technologies (e.g., Hadoop, Hive, Spark)
- Excellent written and verbal communication and interpersonal skills, with the ability to collaborate effectively with technical and business partners
- Excellent understanding of the trade-offs inherent in data design and engineering decisions
- Demonstrated ability to navigate between big-picture and implementation details
Slack is a layer of the business technology stack that brings together people, data, and applications – a single place where people can effectively work together, find important information, and access hundreds of thousands of critical applications and services to do their best work. From global Fortune 100 companies to corner markets, businesses and teams of all kinds use Slack to bring the right people together with all the right information. Slack is headquartered in San Francisco, CA and has ten offices around the world. For more information on how Slack makes teams better connected, visit slack.com.
Ensuring a diverse and inclusive workplace where we learn from each other is core to Slack’s values. We welcome people of different backgrounds, experiences, abilities and perspectives. We are an equal opportunity employer and a pleasant and supportive place to work.
Come do the best work of your life here at Slack.