Kubernetes - Big Data Administrator - Experis - Temple Terrace, FL (Contractor)

Position: Kubernetes - Big Data Administrator

Duration: Long-term

Location: Temple Terrace, FL

JOB DUTIES:


  • Responsible for implementation and ongoing administration of Kubernetes and Hadoop infrastructure initiatives.
  • Experience in creating and managing production-scale Kubernetes clusters, with a deep understanding of Kubernetes networking.
  • Support our Kubernetes-based projects to resolve critical and complex technical issues.
  • Perform application deployments on Kubernetes clusters and securely manage clusters on at least one cloud provider, applying Kubernetes core concepts:
  • Deployment, ReplicaSet, DaemonSet, StatefulSet, Job.
  • Manage Kubernetes Secrets.
  • Configure ingress controllers (NGINX, Istio, etc.) and cloud-native load balancers; manage Kubernetes storage (PV, PVC, StorageClasses, provisioners).
  • Kubernetes networking (Services, Endpoints, DNS, load balancers).
  • Aligning with the systems engineering team to propose and deploy new hardware and software environments required for Hadoop and to expand existing environments.
  • Working with data delivery teams to set up new Hadoop users. This includes setting up Linux users, setting up Kerberos principals, and testing HDFS, Hive, Pig, Spark, and MapReduce access for the new users.
  • Cluster maintenance, as well as creation and removal of nodes, using Hadoop management tools such as Ambari, Cloudera Manager, etc.
  • Sound knowledge of Ranger, NiFi, Kafka, Atlas, Hive, Storm, Pig, Spark, Elasticsearch, Splunk, Solr, Kyvos, HBase, and other big data tools.
  • Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
  • Screen Hadoop cluster job performances and capacity planning
  • Monitor Hadoop cluster connectivity and security
  • Manage and review Hadoop log files, File system management and monitoring.
  • HDFS support and maintenance.
  • Diligently teaming with the infrastructure, network, database, application and business intelligence teams to guarantee high data quality and availability.
  • Collaborating with application teams to install operating system and Hadoop updates, patches, version upgrades when required.
  • Implement automation tools and frameworks (CI/CD pipelines). Knowledge of Ansible, Jenkins, Jira, Artifactory, Git, etc.
  • Design, develop, and implement software integrations based on user feedback.
  • Troubleshoot production issues and coordinate with the development team to streamline code deployment.
  • Analyze code and communicate detailed reviews to development teams to ensure a marked improvement in applications and the timely completion of projects.
  • Collaborate with team members to improve the company's engineering tools, systems and procedures, and data security.
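The Kubernetes core concepts listed above (Deployments, Secrets, PV/PVC storage) come together in a single manifest. Below is a minimal, hypothetical sketch; all names (demo-app, app-secret, app-data) and the image are placeholders, not a production configuration:

```yaml
# Hypothetical Deployment illustrating Secrets and persistent storage.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          envFrom:
            - secretRef:
                name: app-secret   # assumes a Secret named app-secret exists
          volumeMounts:
            - name: data
              mountPath: /var/lib/app
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: app-data    # assumes a PVC named app-data exists
```

In a real cluster the Secret and PVC would be created first (e.g. with `kubectl apply`), and the StorageClass backing the PVC would determine the provisioner used.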
MUST HAVE SKILLS:

  • Must know Hadoop and big data infrastructure.
  • Certifications such as CKA (Certified Kubernetes Administrator), CKAD (Certified Kubernetes Application Developer).
  • Experience in creating and managing production-scale Kubernetes clusters, with a deep understanding of Kubernetes networking.
  • Expert in Hadoop administration with knowledge of Hortonworks, Cloudera, or MapR big data management tools.
  • Expert in developing/managing Java and Web applications.
  • Expert in implementing and troubleshooting Hive, Spark, Pig, Storm, Kafka, NiFi, Atlas, Kyvos, Elasticsearch, Solr, Splunk, and HBase applications.
  • Possess a strong command of software-automation production systems (Jenkins and Selenium) and code deployment tools (Puppet, Ansible, and Chef).
  • Working knowledge of Ruby or Python and common DevOps tools such as Git and GitHub.
  • Working knowledge of database (Oracle/Teradata) and SQL (Structured Query Language).
  • General operational expertise, including good troubleshooting skills and an understanding of system capacity, bottlenecks, and the basics of memory, CPU, OS, storage, and networking.
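As a small illustration of the log-review and troubleshooting duties described above, here is a hedged Python sketch that tallies log levels in Hadoop-style log text. The sample lines are invented for demonstration:

```python
import re
from collections import Counter

def summarize_log_levels(log_text):
    """Count log levels (INFO/WARN/ERROR/FATAL) in Hadoop-style log lines."""
    levels = Counter()
    for line in log_text.splitlines():
        match = re.search(r"\b(INFO|WARN|ERROR|FATAL)\b", line)
        if match:
            levels[match.group(1)] += 1
    return dict(levels)

# Invented sample lines mimicking the default Hadoop log4j layout.
sample = """\
2024-01-01 10:00:00,123 INFO  namenode.NameNode: STARTUP_MSG
2024-01-01 10:00:05,456 WARN  hdfs.DFSClient: Slow read from datanode
2024-01-01 10:00:09,789 ERROR datanode.DataNode: Disk failure on /data/1
"""

print(summarize_log_levels(sample))  # {'INFO': 1, 'WARN': 1, 'ERROR': 1}
```

In practice a script like this would read rotated log files from disk rather than an inline string, and could alert when the ERROR or FATAL count crosses a threshold.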
DESIRED SKILLS:

  • The most essential requirements: the ability to deploy a Kubernetes cluster, perform application deployments on it, monitor critical parts of the cluster, configure high availability, schedule and configure workloads, and take backups.
  • Good knowledge of Linux as Hadoop runs on Linux.
  • Familiarity with open source configuration management and deployment tools such as Puppet or Chef and Linux scripting.
  • Knowledge of Troubleshooting Core Java Applications is a plus.
  • Problem-solving skills.
  • A methodical and logical approach.
  • The ability to plan work and meet deadlines.
  • Accuracy and attention to detail.
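The backup duty mentioned above is commonly handled by snapshotting etcd on a schedule. A hypothetical CronJob sketch follows; the image, schedule, endpoint, and PVC name are all placeholders, not a production configuration:

```yaml
# Hypothetical CronJob sketching scheduled etcd backups.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: etcd-backup
spec:
  schedule: "0 2 * * *"              # nightly at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: backup
              image: bitnami/etcd:3.5   # placeholder image providing etcdctl
              command:
                - /bin/sh
                - -c
                - etcdctl --endpoints=https://etcd:2379 snapshot save /backup/etcd-$(date +%F).db
              volumeMounts:
                - name: backup
                  mountPath: /backup
          volumes:
            - name: backup
              persistentVolumeClaim:
                claimName: etcd-backup-pvc   # assumed PVC for storing snapshots
```

A real setup would also mount the etcd client certificates and verify each snapshot with `etcdctl snapshot status` before rotating old backups out.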
EDUCATION/CERTIFICATIONS:

  • B.S. or equivalent engineering degree with at least 4 years of Kubernetes work experience.

Recommended Skills

MapReduce, Administration, Apache Hive, Storage (Computing), Ambari, Jenkins
 