DevOps Engineers:
Project Scope:
The client needs to build out a DevOps team in order to continue the buildout of the AWS environment as well as deploy a COTS solution to the enterprise
AWS Engineers
Will be migrating a large amount of data to the cloud
Experience with IAM, security policies, etc. is needed
CI/CD pipeline experience is required; a pipeline already exists, and the engineers will build on the foundation of what already exists
Jenkins and Python experience is needed
Must be well versed in Python
Since the product being worked with is a COTS product, there will be no build stage; releases go directly to deployment
Artifactory experience is nice to have
Terraform is a must-have
Experience across the AWS managed services
Currently working with Docker images; will use ECS or EKS for orchestration
Slack, New Relic, and CloudWatch are the monitoring tools currently in operation
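Because the product is COTS, the pipeline skips compilation entirely: it fetches the vendor artifact and goes straight to test and deploy. A minimal plain-Python sketch of that deploy-only stage ordering (the stage names and the runner are illustrative, not the client's actual Jenkins configuration):

```python
# Illustrative deploy-only pipeline for a COTS product: there is nothing to
# compile, so the flow goes from fetching the vendor artifact straight to
# deployment. Stage names are hypothetical, not the client's Jenkins setup.

def fetch_artifact(log):
    log.append("fetched vendor package from Artifactory")

def smoke_test(log):
    log.append("smoke tests passed")

def deploy(log):
    log.append("deployed to ECS")

# Note the absence of a build stage.
PIPELINE = [fetch_artifact, smoke_test, deploy]

def run(stages):
    """Execute each stage in order, collecting a simple audit log."""
    log = []
    for stage in stages:
        stage(log)
    return log
```

Calling `run(PIPELINE)` yields the three log entries in order, with no build step between fetch and deploy.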
Must Haves:
In-depth AWS experience, including migration of infrastructure and data to the cloud
Terraform (Infrastructure as Code)
Docker experience
Python scripting
CI/CD build experience; Jenkins is preferred
Nice to Haves:
Heavy monitoring experience with Slack, New Relic, and CloudWatch
Experience deploying a COTS solution to the cloud
Required Skills :
Primary: Python, Ab Initio, AWS
Secondary: Spark, Scala/Java, Oracle PL/SQL
Main requirements: building pipelines using Spark and Scala, with Oracle experience
Data Track:
-The current persistence layer is Oracle 11g; they want to migrate to 19c, and then plan to move to RDS
-Today they use GoldenGate for replication between two regions; once in the cloud they will use RDS
-All of the pipelines are Spark and Scala pipelines; existing pipelines will be used as the foundation
-Spark and Scala with Python
-Once the pipeline is built, there is a lot of integration required with the Tuning infrastructure
-Tuning is the tokenization platform: it tokenizes and detokenizes sensitive data
-While the pipeline is migrating data from the source to the target, it will make a call to the Tuning service
-Data engineers are not expected to have migration experience, but it would be nice to have
-It will be a real-time pipeline and will need to do change data capture; incremental data also needs to be migrated. The one-time activity will cover 16 TB of data plus any new data generated as part of the migration
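The data-track flow described above, a one-time full load followed by change-data-capture increments, tokenizing sensitive fields via the Tuning service while records are in flight, can be sketched in plain Python. The real pipelines are Spark/Scala, and `tokenize` here is a stand-in for the actual Tuning API, whose interface is not described in the req:

```python
# Minimal sketch of the migration flow: full load + CDC increments, with
# sensitive fields tokenized in transit via a stubbed Tuning service.
# All names are illustrative; the real pipelines are Spark/Scala.

def tokenize(value: str) -> str:
    """Stand-in for a call to the Tuning tokenization service."""
    return f"tok({value})"

def migrate_batch(records, target):
    """Move one batch from source to target, tokenizing sensitive data mid-flight."""
    for rec in records:
        out = dict(rec)
        out["ssn"] = tokenize(rec["ssn"])  # Tuning call happens during the move
        target[out["id"]] = out            # upsert by key, so CDC rows overwrite

# One-time full load (the 16 TB bulk migration, reduced to two toy rows)...
source_snapshot = [{"id": 1, "ssn": "111-11-1111"}, {"id": 2, "ssn": "222-22-2222"}]
target_db = {}
migrate_batch(source_snapshot, target_db)

# ...followed by a CDC increment: rows changed or created after the snapshot.
cdc_increment = [{"id": 2, "ssn": "999-99-9999"}, {"id": 3, "ssn": "333-33-3333"}]
migrate_batch(cdc_increment, target_db)
```

After both passes, the target holds all three rows with tokenized SSNs, and row 2 reflects its post-snapshot update, which is the essential CDC behavior the req asks for.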
Basic Qualification :
Additional Skills :
Background Check :Yes
Notes :Upon offer, it is required to show the consultant's W2 take-home pay in the Exh A. There is a 25% markup requirement on all candidates submitted. The vendor's markup should not exceed 25% of the candidate's take-home $$$. The vendor must disclose the candidate's W2 hourly $$$ in the comments section when submitting in iLabor.
Selling points for candidate :
Project Verification Info :
Primary: Python, Ab Initio, AWS
Secondary: Spark, Scala/Java, Oracle PL/SQL
Main requirements: building pipelines using Spark and Scala, with Oracle experience
Data Track:
-The current persistence layer is Oracle 11g; they want to migrate to 19c, and then plan to move to RDS
-Today they use GoldenGate for replication between two regions; once in the cloud they will use RDS
-All of the pipelines are Spark and Scala pipelines; existing pipelines will be used as the foundation
-Spark and Scala with Python
-Once the pipeline is built, there is a lot of integration required with the Tuning infrastructure
-Tuning is the tokenization platform: it tokenizes and detokenizes sensitive data
-While the pipeline is migrating data from the source to the target, it will make a call to the Tuning service
-Data engineers are not expected to have migration experience, but it would be nice to have
-It will be a real-time pipeline and will need to do change data capture; incremental data also needs to be migrated. The one-time activity will cover 16 TB of data plus any new data generated as part of the migration
Exclusive to Apex :Yes
Face to face interview required :No
Candidate must be local :No
Candidate must be authorized to work without sponsorship :No
Interview times set :Yes
Type of project :Development/Engineering
Master Job Title :DevOps: Applications
Branch Code :Richmond