Job Title: Big Data Engineer
Location: Chicago, IL / Richmond, VA / McLean, VA / Wilmington, DE
Type of hire: Contract
• 5+ years (senior-level) of strong programming experience with object-oriented/functional scripting languages: Scala or Python
• 5+ years of experience with big data tools: Hadoop, Apache Spark, etc.
• 1+ years of strong technical experience with AWS cloud services and DevOps engineering: S3, IAM, EC2, EMR, RDS, Redshift, and CloudWatch, along with Docker, Kubernetes, GitHub, Jenkins, and CI/CD
• Experience with stream-processing systems, e.g., Spark Streaming with Python (nice to have); see the streaming sketch after this list
• 1+ years of experience with relational SQL databases (e.g., Postgres, Snowflake) and NoSQL databases (e.g., Cassandra).
• Create and maintain optimal data pipeline architecture; assemble large, complex data sets that meet functional and non-functional business requirements.
• Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, redesigning infrastructure for greater scalability, etc.
• Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies; see the batch ETL sketch after this list.
• Build analytics tools that use the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.
• Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
• Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader.
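To give candidates a concrete sense of the batch ETL work described above, here is a minimal PySpark sketch (illustrative only, not part of the formal posting); the S3 bucket, paths, and column names are hypothetical placeholders.

```python
# Minimal PySpark batch ETL sketch: extract raw CSV from S3, transform,
# and load partitioned Parquet. All bucket, path, and column names are
# hypothetical placeholders, not real project assets.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw events from S3.
raw = spark.read.option("header", "true").csv("s3://example-bucket/raw/events/")

# Transform: derive an event date and aggregate daily counts per customer.
daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("customer_id", "event_date")
       .agg(F.count("*").alias("event_count"))
)

# Load: write partitioned Parquet for downstream analytics consumers.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/curated/daily_events/"
)

spark.stop()
```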
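Likewise, for the nice-to-have stream-processing skill, a minimal Spark Structured Streaming sketch assuming a Kafka source; the broker address, topic, and checkpoint path are illustrative assumptions only.

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and
# maintain running counts per key. Broker, topic, and checkpoint path are
# hypothetical placeholders; running it requires the spark-sql-kafka package.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-stream").getOrCreate()

# Source: read a stream of records from a Kafka topic.
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "events")
         .load()
)

# Kafka keys arrive as bytes; cast the key to string and count events per key.
counts = (
    events.select(F.col("key").cast("string").alias("key"))
          .groupBy("key")
          .count()
)

# Sink: print the running aggregate to the console for demonstration.
query = (
    counts.writeStream.outputMode("complete")
          .format("console")
          .option("checkpointLocation", "/tmp/example-checkpoint")
          .start()
)
query.awaitTermination()
```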
Recommended Skills
AWS EMR
Hadoop
Spark