Leidos is seeking a Software Engineer for a position located in the Reston, VA area. This position requires an active TS/SCI with polygraph clearance to be considered.
The Software Engineer will split their time as a shared (50/50) resource between two business analytic units. They will be responsible for ingesting, analyzing, parsing, and conditioning data to populate a common data layer using modern DevOps principles, and for creating efficient data structures to support data science activities that inform executive leadership's tactical and strategic business decisions. The two units share a common computation platform, but each unit has a distinct environment with its own access-controlled data. There is a significant commitment to investing in business analytic capabilities that enable data-driven decision-making at the executive and working levels. Supporting this requires aggregating, modeling, and fusing disparate data types from diverse business functions into a single data repository.
Primary Responsibilities
The Software Engineer will work closely with members of two business analytic units to support identified data needs and coordinate with a data services team to ensure the proper population of business data in the target data science environment. Software Engineer support for the business analytic teams will include:
- Creating data ingestion workflows that populate a common data layer with standard-format data products.
- Using the data ingestion framework provided by the data services team to streamline the addition of new data sets to the common data layer.
- Maintaining the data pipelines that support the analytic capabilities developed by the business analytic teams, ensuring data continues to flow into the system reliably and at the cadence with which source data is updated.
- Creating efficient data structures, in line with the data services team's best practices, that streamline access to common business and analytical functions.
- Providing technical support to members of the business analytic teams, and applying knowledge gained from user interactions to improve and advance the analytic environment by engaging with the environment's DevOps team so that evolving needs and capabilities are captured and developed.
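As a flavor of the ingestion work described above, the sketch below conditions a raw CSV extract into a standardized record format for a common data layer. This is purely illustrative; the function name, the `source` tag, and the field schema are hypothetical stand-ins, not part of any actual Leidos framework.

```python
import csv
import io
import json

def ingest_to_common_layer(raw_csv: str) -> list[dict]:
    """Parse a raw CSV extract and condition it into a standard record
    format for a common data layer (hypothetical schema)."""
    records = []
    for row in csv.DictReader(io.StringIO(raw_csv)):
        records.append({
            "source": "finance",                      # originating business unit (assumed)
            "metric": row["metric"].strip().lower(),  # normalize key casing
            "value": float(row["value"]),             # enforce a numeric type
        })
    return records

raw = "metric,value\nRevenue,1200.5\nHeadcount,42\n"
print(json.dumps(ingest_to_common_layer(raw)))
```

In practice a workflow like this would run inside the data services team's ingestion framework (e.g., as a NiFi processor or Spark job) rather than as a standalone script.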
Basic Qualifications (Mandatory Skills)
- Bachelor's degree and 8-12 years of relevant experience, or Master's degree with 6-10 years of prior relevant experience. A Doctorate in a technical domain is also acceptable.
- Demonstrated experience with Python programming
- Demonstrated experience with Java programming
- Demonstrated experience with General Linux computing and advanced bash scripting
- Demonstrated experience with AWS services such as RDS, S3, and EC2
- Demonstrated experience constructing complex multi-data source SQL queries on a variety of relational database technologies, for example, Oracle, PostgreSQL, MySQL, etc.
- Demonstrated experience defining and implementing robust and scalable data structures, e.g. SQL DDL, to support centralized data repositories that enable data science and data analytic activities
- Demonstrated experience with code repositories and build/deployment pipelines, specifically using Jenkins and/or Git
- Demonstrated experience developing data pipelines to bring data into a central environment in support of multi-member teams comprised of data scientists and data analysts
- Demonstrated experience with Apache Hadoop and/or Apache Spark stack for big data processing
- Demonstrated experience with NiFi
- Demonstrated experience with Docker, Kubernetes, and/or other similar container frameworks
- Demonstrated experience with Avro, Parquet, and/or similar advanced data storage formats
- Demonstrated experience with Tableau, Apache Superset, and/or other similar data visualization tools
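The SQL DDL qualification above refers to defining data structures that make a centralized repository efficient to query. A minimal sketch of the idea, using Python's built-in sqlite3 as a stand-in for the relational databases named (table names, columns, and values are invented for illustration):

```python
import sqlite3

# In-memory database standing in for a central data repository (illustrative only).
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE business_unit (id INTEGER PRIMARY KEY, name TEXT NOT NULL);
    CREATE TABLE metric (
        unit_id INTEGER NOT NULL REFERENCES business_unit(id),
        name    TEXT    NOT NULL,
        value   REAL    NOT NULL
    );
    -- A covering index on the most common lookup pattern keeps analytic queries fast.
    CREATE INDEX idx_metric_unit ON metric(unit_id, name);
""")
con.execute("INSERT INTO business_unit VALUES (1, 'analytics-east')")
con.executemany("INSERT INTO metric VALUES (?, ?, ?)",
                [(1, "revenue", 1200.5), (1, "headcount", 42.0)])

# A multi-table query of the kind the qualifications describe.
row = con.execute("""
    SELECT b.name, SUM(m.value)
    FROM metric m JOIN business_unit b ON m.unit_id = b.id
    WHERE m.name = 'revenue'
    GROUP BY b.name
""").fetchone()
print(row)
```

The same DDL pattern (normalized tables, foreign keys, indexes chosen for the dominant query shape) carries over to Oracle, PostgreSQL, or MySQL with only dialect-level changes.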
(Desired) Java Certification
External Referral Bonus:
Eligible
External Referral Bonus $:
Potential for Telework:
No
Clearance Level Required:
Top Secret/SCI with Polygraph
Travel:
Yes, 10% of the time
Scheduled Weekly Hours:
40
Shift:
Day
Requisition Category:
Professional
Job Family:
Software Development
Pay Range:
Recommended Skills
Data Pipeline
Apache Spark
Data Ingestion
Docker
Avro
Amazon S3