DataOps Senior Data Engineer


Published: Thu, 14 Nov 2024 04:50:38 GMT

Position: DataOps Senior Data Engineer at Fetch

About Us:

Fetch is the leading rewards app in America, enabling millions of users to earn rewards for their everyday purchases and activities. To date, we have delivered over $1 billion in rewards and earned more than 5 million five-star reviews from satisfied users. With investments from SoftBank, Univision, and Hamilton Lane, and partnerships with both challenger brands and Fortune 500 companies, Fetch is revolutionizing the way brands and consumers connect in the marketplace. As a Fetch employee, you will play a vital role in a platform that promotes brand loyalty and creates lifelong consumers through the power of Fetch points. Our focus on user and partner success extends to our employees, and we are proud to be recognized by Forbes as one of America’s Best Startup Employers for two consecutive years. At Fetch, we foster a people-first culture grounded in trust, accountability, and innovation. We encourage our employees to challenge ideas, think big, and have fun while doing it. Fetch is an equal employment opportunity employer.

The Role:

Fetch is seeking a DataOps Senior Data Engineer to join our growing data team and drive the design and construction of scalable and efficient data pipelines and transformation systems. In this role, you will be responsible for processing terabytes of data daily to support Fetch’s business needs. The ideal candidate will collaborate with cross-functional teams to establish a robust data governance structure, ensuring efficient, secure, and compliant data management. As a crucial member of our team, your work will help drive Fetch’s success in creating a world-class data availability platform.

Responsibilities:

– Design and implement both real-time and batch data processing pipelines using technologies like Apache Kafka, Apache Flink, or managed cloud streaming services for scalability and resilience.
– Develop data pipelines that efficiently process terabytes of data daily, utilizing data lakes and data warehouses within the AWS cloud. Proficiency in technologies such as Apache Spark is essential for handling large-scale data processing.
– Establish robust schema management practices and lay the foundation for future data contracts. Ensure pipeline integrity by enforcing data quality checks, improving overall data reliability and consistency.
– Create tools to support the rapid development of data products and recommend patterns for data pipeline deployments.
– Design, implement, and maintain data governance frameworks and best practices to ensure data quality, security, compliance, and accessibility throughout the organization.
– Mentor and guide junior engineers, fostering their growth in best practices and efficient development processes.
– Collaborate with the DevOps team to integrate data needs into DevOps tooling.
– Promote a culture of collaboration, automation, and continuous improvement in data engineering processes, championing DataOps practices within the organization.
– Stay updated on emerging technologies, tools, and trends in data processing and analytics, evaluating their potential impact and relevance to Fetch’s strategy.

Requirements:

– Self-starter with the ability to take a project from architecture to adoption.
– Experience with Infrastructure as Code tools such as Terraform or CloudFormation, and the ability to automate the deployment and management of data infrastructure.
– Familiarity with Continuous Integration and Continuous Deployment (CI/CD) processes, with experience setting up and maintaining CI/CD pipelines for data applications.
– Proficiency in software development lifecycle processes, with a bias toward releasing fast and improving incrementally.
– Experience with tools and frameworks for ensuring data quality, such as data validation, anomaly detection, and monitoring. Ability to design systems to track and enforce data quality standards.
– Proven experience in designing, building, and maintaining scalable data pipelines capable of processing terabytes of data daily using modern data processing frameworks (e.g., Apache Spark, Apache Kafka, Flink, Open Table Formats, modern OLAP databases).
– Strong understanding of data architecture principles and the ability to evaluate emerging technologies.
– Proficiency in at least one modern programming language (Go, Python, Java, Rust) and SQL.
– Comfortable presenting and challenging technical decisions in a peer review environment.
– Undergraduate or graduate degree in relevant fields such as Computer Science, Data Science, or Business Analytics.

Benefits:

At Fetch, we value the well-being and growth of our employees. We offer the following benefits:

– Equity for all employees.
– 401k matching program: Dollar-for-dollar match up to 4%.
– Comprehensive medical, dental, and vision plans for all employees, including coverage for pets.
– Continuing education reimbursement of up to $10,000 per year.
– Employee Resource Groups focused on promoting diversity and inclusion in the workplace.
– Flexible paid time off, including 9 paid holidays and a week-long break at the end of the year.
– Robust leave policies, including 20 weeks of paid parental leave for primary caregivers, 14 weeks for secondary caregivers, and a flexible return-to-work schedule. We also offer a $2,000 baby bonus.