Logistics Management Institute

Senior Data Engineer

Logistics Management Institute, McLean, Virginia, US, 22107


Overview

As a Data Engineer, you will help develop and deploy technical solutions to solve our customers’ hardest problems, using various platforms to integrate data, transform insights, and build first-class applications for operational decisions. You will leverage everything around you: core customer products, open-source technologies, and anything you and your team can build to drive real impact. In this role, you will work with customers around the globe, gaining rare insight into the world’s most important industries and institutions. Each mission presents different challenges, from the regulatory environment to the nature of the data to the user population. You will work to accommodate all aspects of an environment to drive real technical outcomes for our customers.

LMI is a consultancy dedicated to powering a future-ready, high-performing government, drawing from expertise in digital and analytic solutions, logistics, and management advisory services. We deliver integrated capabilities that incorporate emerging technologies and are tailored to customers’ unique mission needs, backed by objective research and data analysis. Founded in 1961 to help the Department of Defense resolve complex logistics management challenges, LMI continues to enable growth and transformation, enhance operational readiness and resiliency, and ensure mission success for federal civilian and defense agencies.

LMI has been named a 2024 Best Places to Work by Built In! We are honored to be recognized as a company that values a people-centered culture, and we are grateful to our employees for making this possible!

Responsibilities

Perform various ETL functions

Debug issues related to delayed or missing data feeds

Write transformations and derive new datasets

Monitor build progress and debug build problems

Conduct rapid development and iteration cycles with SMEs, including testing and troubleshooting application issues

Containerize workflows

Perform data modeling and metadata tagging

Create and maintain data access and sharing pipelines

Work with technical and non-technical team members to derive solutions

Develop architecture diagrams for proposed solutions

Configure cloud resources for data pipelining, storage, and retrieval

Qualifications

Bachelor’s degree in data science, mathematics, statistics, economics, computer science, engineering, or a related business or quantitative discipline (master’s degree preferred)

7+ years of work experience in data engineering, data science, software engineering, or other relevant hands-on experience

Proficiency with programming languages such as Python or Java

Working knowledge of databases and SQL; preferred qualifications include linking analytic and data visualization products to database connections

Experience with Test Driven Development (TDD)

Experience with open-source tools such as Superset, Airflow, Kafka, Postgres, Docker, Helm (or similar), Kubernetes, and Spark

Experience building production-ready data pipelines

Experience developing commercial data products and services

Experience with streaming data is preferred

Experience with building solutions on both cloud services and on-prem deployments

Experience with the restrictions of working in classified engineering environments is preferred

Knowledge or experience with federal ATO/RMF processes

Knowledge or familiarity with IDAM/ICAM/IAM

Ability to work effectively in teams of technical and non-technical individuals.

Skill and comfort working in a rapidly changing environment with dynamic objectives and frequent iteration with users.

Demonstrated ability to continuously learn, work independently, and make decisions with minimal supervision.

Proven track record of strong communication, including feedback gathering, execution updates, and troubleshooting.

Minimum clearance requirement: Ability and willingness to obtain a Secret clearance.