*This position is remote from anywhere in the United States*
At Cepheid, we are passionate about improving health care through fast, accurate diagnostic testing. Our mission drives us, every moment of every day, as we develop scalable, groundbreaking solutions to tackle the most complex health challenges. Our associates are involved in every stage of molecular diagnostics, from ideation to development and delivery of testing advancements that improve patient outcomes across a range of settings. As a member of our team, you can make an immediate, measurable impact on a global scale, within an environment that fosters career growth and development.
Cepheid is proud to work alongside a community of six fellow Danaher Diagnostics companies. Together, we’re working at the pace of change on diagnostic tools that address the world’s biggest health challenges, driven by knowing that behind every test there is a patient waiting.
The Sr IT DevOps Engineer – Data Platform will be responsible for a variety of data projects, including orchestrating pipelines that stage high-quality data for consumption by business analysts, data scientists, and visualization developers using modern Big Data tools and architectures. This position requires a results-oriented team player with a high sense of integrity, strong interpersonal skills, great attention to detail, and a hands-on approach to next-generation technologies, contributing to a team that delivers the latest streaming and data technologies.
In this role, you will have the opportunity to:
Be a contributing member of a robust, agile team that is passionate about next-generation data and analytics technologies. You will build and improve the analytics platforms and tools that bring state-of-the-art Big Data capabilities to analytics users and applications, and you will assist Data Engineering teams during design and development of highly sophisticated, critical data projects. Understanding how to code and integrate platform solutions into the data analytics ecosystem will be essential to your success. Other responsibilities will include:
Developing rapid prototype solutions by integrating various components and operationalizing data analytics tools for enterprise use
Being part of teams delivering data projects, including implementing new data technologies for unstructured, streaming, and high-volume data
Developing and deploying distributed Big Data applications using frameworks such as Apache Spark and Kafka
Working with languages and frameworks such as Python and Spark, with an emphasis on tuning, optimization, and best practices for Data Engineers
Applying DevOps practices such as continuous integration, continuous deployment, test automation, build automation, and test-driven development to enable rapid delivery of end-user capabilities
Developing data solutions on cloud deployments such as AWS
Bringing industry experience from Financial Services or other regulated industries, including compliance and audit practices
Leading delivery hands-on through Agile methodologies
Following management practices by adhering to required standards and processes
Working with architects to craft sophisticated solutions and leading them from inception to production
Building and maintaining DevOps processes and application infrastructure, and utilizing cloud services (including systems, networks, and other artifacts)
The essential requirements of the job include:
A Bachelor’s degree in Computer Science, Engineering, Information Systems, Business, or an equivalent field of study, as well as 10+ years of experience as a DevOps Engineer or in a similar role. Additionally, we seek a candidate with at least 7 years of experience working with data solutions (data lakes, cloud data warehouses). Additional requirements include:
3+ years developing data solutions on AWS
2+ years with Linux/Unix, including basic commands, shell scripting, and solution engineering
2+ years with S3, AWS Glue, EMR, MSK or Confluent Kafka, RDS, serverless technologies, DynamoDB, Lambda, and Kubernetes
2+ years supporting ETL/ELT tools such as Matillion or equivalent, Power BI services, and semantic layer software such as AtScale
2+ years with AD and gateway servers (highly preferred)
Experience developing software solutions to build out capabilities on Big Data and other enterprise data platforms
Experience solving problems and delivering high-quality results in a fast-paced environment
Expertise coding in Python, Spark, and Java, with an emphasis on tuning and optimization
Experience working with automated build and continuous integration systems (Chef, Jenkins, Docker, GitHub)
AWS certification preferred but not required prior to hiring
When you join us, you’ll also be joining Danaher’s global organization, where 69,000 people wake up every day determined to help our customers win. As an associate, you’ll try new things, work hard, and advance your skills with guidance from dedicated leaders, all with the support of powerful tools and the stability of a tested organization.
Danaher Corporation and all Danaher Companies are committed to equal opportunity regardless of race, color, national origin, religion, sex, age, marital status, disability, veteran status, sexual orientation, gender identity, or other characteristics protected by law. We value diversity and the existence of similarities and differences, both visible and not, found in our workforce, workplace and throughout the markets we serve. Our associates, customers and shareholders contribute unique and different perspectives as a result of these diverse attributes.
The EEO posters are available.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
If you’ve ever wondered what’s within you, there’s no better time to find out.