At CrowdStrike we're on a mission - to stop breaches. Our groundbreaking technology, services delivery, and intelligence gathering, together with our innovations in machine learning and behavioral-based detection, allow our customers to not only defend themselves, but do so in a future-proof manner. Because of that, we've earned numerous honors and top rankings for our technology, organization, and talent. Our culture was purpose-built to be remote first, and we offer flexible work arrangements to help our people manage their personal and professional lives in a way that works for them. If you're ready to work on unrivaled technology with a team that makes a difference every day, let's talk.

About The Role:

CrowdStrike is looking to hire a Senior DevOps Engineer to join the Data Infrastructure team. The team is on a mission to create a hyper-scale data lake that helps find bad actors and stop breaches. The team builds and operates systems that centralize all of the data the Falcon platform collects, making it easy for internal and external customers to transform and access that data for analytics, machine learning, and threat hunting. As a Systems Engineer on this team, you will be responsible for building our Hadoop ecosystem in our data center, including HDFS, YARN, Hadoop cluster management, Spark, Hive, Presto, monitoring, and logging. We are looking for candidates who have set up petabyte-scale Hadoop clusters in a data center and are passionate about solving problems at high scale. This role also involves training and mentoring junior engineers.

Responsibilities:
- Perform and oversee a variety of functions to ensure that our Hadoop infrastructure is available, reliable, stable and secure
- Apply judgment in analyzing and selecting technologies, and in installing and maintaining the software and hardware systems that allow multiple Engineering and Data Science teams to interact with the platform
- Manage deployment, monitoring, and alerting dashboards with the goal of establishing an automated DevOps model
- Build and maintain the Hadoop infrastructure so that software engineers, data analysts, and data scientists can run jobs to gather data and insights
Qualifications:

- Hadoop certification is desirable
- Understanding of Apache Hadoop ecosystem technologies (HDFS, YARN, Spark, Kafka, ZooKeeper, Hive, Ranger, Presto)
- Experience with large-scale business critical platforms (primarily Linux), server-grade machines, shell scripting
- Solid understanding of databases (SQL, NoSQL, Hive, Cassandra, and in-memory databases such as Redis)
- Prior experience with Grafana and Prometheus is desirable
- Familiarity with Terraform and Chef is strongly preferred
- Proficiency in a scripting language (Python and/or shell)
- Proven ability to work with both local and remote teams
- Strong verbal and written communication skills
- Bachelor's degree in an applicable field, such as CS, CIS, or Engineering
Benefits of Working at CrowdStrike:

- Remote-friendly culture
- Market leader in compensation and equity awards
- Competitive vacation and flexible working arrangements
- Comprehensive health benefits + 401k plan
- Paid Parental Leave, including adoption
- Wellness programs
- A variety of professional development and mentorship opportunities
- Open offices with stocked kitchens, coffee, soda, and treats