Senior Data Engineer, Pipeline Team
Paylocity
- Czechia
- Permanent
- Full-time
Join our Pipeline Team as a Senior Data Engineer - a role designed for a software engineer who thrives at the intersection of high-scale data and distributed systems. You won't just be writing ETL; you will be engineering the high-performance delivery systems that power our customer-facing data products and serve as the governed source for our future Agentic AI initiatives. We are looking for a candidate who treats data as a product and infrastructure as code, applying rigorous software engineering principles (CI/CD, modular design, and automated testing) to solve complex puzzles in Snowflake optimization and native data-tier security.

Primary Responsibilities
The list below represents the primary duties of the position; others may be assigned as needed. To perform this job successfully, an individual must be able to perform each essential duty satisfactorily. The requirements listed below are representative of the knowledge, skill, and/or ability required. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions.
- Architect, implement, and manage complex data pipelines and storage solutions that deliver high-fidelity data directly to customer-facing applications and AI-ready endpoints.
- Optimize data workflows for performance, scalability, and reliability, specifically solving the challenge of maintaining low-latency refresh rates while ensuring strict Snowflake cost-efficiency.
- Design and deploy scalable architectural frameworks and shared platforms to solve foundational needs, such as implementing native Attribute-Based Access Control (ABAC) security models or automated governance tooling (a sketch of this pattern follows this list).
- Break down and solve multi-layered technical challenges systematically, applying organized analysis and methodical execution to intricate data engineering problems.
- Lead modern data-integration projects, developing and extending internal tooling such as reusable dbt packages, custom macros, and automated data-quality guardrails.
- Mentor junior data engineers and enforce software engineering best practices, including modular design patterns, rigorous code reviews, and comprehensive documentation.
- Work closely with stakeholders to define data strategy and solutions that natively support the broader ecosystem of modern metadata, governance, and AI-enabled workflows.
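To give a concrete, hedged flavor of the ABAC work described above, the sketch below shows the general shape of a native Snowflake row access policy applied from Python. It assumes the snowflake-connector-python client, and every object, role, and connection name is an illustrative assumption rather than Paylocity's actual model.

```python
"""Minimal sketch: an ABAC-style Snowflake row access policy applied from
Python. All object, role, and connection names are illustrative."""
import snowflake.connector

# Placeholder credentials; a real pipeline would pull these from a
# secrets manager rather than hard-coding them.
conn = snowflake.connector.connect(
    account="my_account",
    user="svc_pipeline",
    authenticator="externalbrowser",
)

ABAC_DDL = [
    # Mapping table: which role may see which region (the "attribute").
    """
    CREATE TABLE IF NOT EXISTS governance.role_region_map (
        role_name STRING,
        region    STRING
    )
    """,
    # Row access policy that consults the mapping table at query time,
    # so access follows attributes instead of hard-coded role checks.
    """
    CREATE ROW ACCESS POLICY IF NOT EXISTS governance.region_policy
      AS (region STRING) RETURNS BOOLEAN ->
        EXISTS (
            SELECT 1 FROM governance.role_region_map m
            WHERE m.role_name = CURRENT_ROLE()
              AND m.region    = region
        )
    """,
    # Attach the policy to a protected table (names are hypothetical).
    """
    ALTER TABLE analytics.employee_facts
      ADD ROW ACCESS POLICY governance.region_policy ON (region)
    """,
]

with conn.cursor() as cur:
    for stmt in ABAC_DDL:
        cur.execute(stmt)
conn.close()
```

In practice, DDL like this would live in version-controlled, tested migrations rather than ad-hoc scripts, in keeping with the infrastructure-as-code emphasis of the role.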
Requirements

- Bachelor's degree in a technical field (Computer Science, Engineering, or related). Master's preferred.
- 5+ years of experience in data engineering or software development, with a proven track record of building production-grade distributed systems.
- Expertise with dbt and Snowflake, including a deep understanding of Snowflake internals, performance tuning, and cost-optimization strategies.
- Strong knowledge of software development patterns, applying Python and “Infrastructure as Code” principles to the data domain (DRY, testing, modularity).
- Expertise in big data streaming technologies and event-driven architectures (Kafka, AWS Kinesis, EventBridge); a minimal consumer sketch follows this list.
- Experience in data modeling, real-time processing, and implementing complex, native security and access control models at scale.
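As a hedged illustration of the event-driven side of the stack, here is a minimal at-least-once Kafka consumer in Python. It assumes the confluent-kafka client; the broker address, topic, consumer group, and process() sink are all hypothetical.

```python
"""Minimal sketch of an at-least-once Kafka consumer, assuming the
confluent-kafka client; broker, topic, and group names are hypothetical."""
from confluent_kafka import Consumer

def process(payload: bytes) -> None:
    """Stub for the real sink, e.g. a durable write into the warehouse."""
    print(payload)

consumer = Consumer({
    "bootstrap.servers": "broker:9092",   # placeholder address
    "group.id": "pipeline-ingest",        # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,          # commit only after a durable write
})
consumer.subscribe(["payroll.events"])    # hypothetical topic

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue                      # no message within the timeout
        if msg.error():
            if msg.error().fatal():       # unrecoverable broker error
                raise RuntimeError(msg.error())
            continue                      # transient error: skip and retry
        process(msg.value())              # durable write happens here
        consumer.commit(msg)              # commit offset only after success
finally:
    consumer.close()
```

Because the offset is committed only after the write succeeds, delivery is at-least-once, so the downstream write needs to be idempotent.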
Physical Demands

- Ability to sit for extended periods: The role requires sitting at a desk or workstation for long periods, typically 7-8 hours a day.
- Use of computer and phone systems: The employee must be able to operate a computer, use phone systems, and type, including working in multiple software programs and handling inquiries simultaneously.