Client is looking for an Azure Data Engineer to work 100% remote on a 12+ month contract, supporting a team primarily based in AZ with a presence in other countries, such as Poland.
Candidates must have hands-on experience building and deploying Azure Databricks pipelines to Snowflake using PySpark. If a candidate has not built AND deployed such pipelines themselves, please DO NOT submit.
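For context, a minimal sketch of the kind of pipeline in scope: a PySpark transformation on Databricks written to Snowflake through the Snowflake Spark connector. All table names and connection values below are placeholders, and in a real pipeline credentials would come from a secret scope rather than being hard-coded.

# Illustrative sketch only: transform data with PySpark on Databricks and write
# the result to Snowflake via the Snowflake Spark connector (included in
# Databricks runtimes). Table names and connection values are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Hypothetical source table and a simple aggregation in PySpark
orders = spark.table("bronze.orders")
daily_revenue = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)

# Placeholder Snowflake connection options; use a secret scope in practice
sf_options = {
    "sfUrl": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "<database>",
    "sfSchema": "<schema>",
    "sfWarehouse": "<warehouse>",
}

(
    daily_revenue.write
    .format("snowflake")
    .options(**sf_options)
    .option("dbtable", "DAILY_REVENUE")  # target table in Snowflake
    .mode("overwrite")
    .save()
)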
RESPONSIBILITIES
- Prototype, build, deploy and manage data engineering pipelines
- Work with other data engineers, business intelligence experts and data scientists to solve real-life, challenging business problems
- Work with other senior members and leadership on the development and optimization of the data environment to centralize data access for business users
- Work on processes to optimize and monitor complex data in line with the governance strategy, provide feedback to ensure the strategy aligns with available technical resources, and recommend solutions when needed
- Work within a group to identify best practice for achieving a stakeholder's end goal.
- Share knowledge and/or assist each other with thinking through a problem to develop a solution
- Work with departments outside the Global BI group to gather the knowledge needed to fine-tune technical requirements, ensuring high-quality data is consumed and optimized for business use cases
- Work within a group to produce scalable data processes used in platforms for both exploratory and near/real-time analytics by other members of Global BI and business users (see the streaming sketch after this list)
- Be able to explain to business users the benefits of these data processes and provide examples of how to access and use the platform
- Provide the team and leadership with insight into best practices, whether from experience or knowledge acquired through other sources (industry research articles, training, etc.)
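For context, a minimal sketch of a near/real-time process of the kind described above, using Spark Structured Streaming with Delta tables on Databricks. The table names, checkpoint path, and trigger choice are illustrative placeholders only.

# Illustrative sketch only: an incremental/near-real-time process on Databricks
# using Spark Structured Streaming with Delta tables. All table names and the
# checkpoint path below are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically on Databricks

# Read new rows incrementally from a hypothetical raw Delta table
events = spark.readStream.table("bronze.events")

# Light transformation before exposing the data for analytics
cleaned = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .dropDuplicates(["event_id"])
)

# Write to a curated table that BI users and analysts can query
query = (
    cleaned.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/events_cleaned")  # placeholder path
    .outputMode("append")
    .trigger(availableNow=True)  # process what is available, then stop; omit for continuous micro-batches
    .toTable("silver.events_cleaned")
)
query.awaitTermination()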
REQUIREMENTS
- Degree in computer science or a related field.
- 3+ years of relevant professional experience.
- Experience with batch and real-time data processing tools and technologies: Azure Databricks/Spark, Azure Synapse/DW, Azure Analysis Services, Azure Data Factory
- Deep experience with data engineering, big data and analytical technologies using Azure cloud-based data platforms
- Extensive experience in SQL, Python (3.x), and PySpark
- Knowledge of distributed data solutions, storage systems and columnar databases
- Familiarity with Continuous Integration/Continuous Deployment (CI/CD) and Git.
- Fluent in spoken and written English.
- Knowledge of Agile development methods such as Scrum and Kanban.
Nice to Have
- R, Kafka
- Knowledge of key machine learning concepts & ML frameworks (like scikit-learn, H2O.ai, Keras, etc.)
Interested candidates can send an updated resume to ganga.bhavani@cerebra-cosnulting.com