NuWave Solutions

Data Engineer, Staff

NuWave Solutions, Washington, District of Columbia, US, 20022


Overview

BigBear.ai is seeking a Data Engineer to work with one of our clients in Washington, DC, or in the Hampton Roads area. This position will play a crucial role in designing, developing, and maintaining our Advana data infrastructure and systems. Your expertise in ETL, Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks will be essential in ensuring efficient data processing and analysis.

This position will be based out of Washington, DC, or Norfolk, VA, and will offer remote flexibility on an as-needed basis. Eligible candidates will possess an active Secret clearance.

The primary role will be onsite 5 days/week for appropriate classification work.

What you will do

- Design, develop, and implement end-to-end data pipelines, utilizing ETL processes and technologies such as Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks (an illustrative sketch follows this list).
- Create and optimize data pipelines from scratch, ensuring scalability, reliability, and high-performance processing.
- Perform data cleansing, data integration, and data quality assurance activities to maintain the accuracy and integrity of large datasets.
- Leverage big data technologies to efficiently process and analyze large datasets, particularly those encountered in a federal agency.
- Troubleshoot data-related problems and provide innovative solutions to address complex data challenges.
- Implement and enforce data governance policies and procedures, ensuring compliance with regulatory requirements and industry best practices.
- Work closely with cross-functional teams to understand data requirements and design optimal data models and architectures.
- Collaborate with data scientists, analysts, and stakeholders to provide timely and accurate data insights and support decision-making processes.
- Maintain documentation for software applications, workflows, and processes.
- Stay updated with emerging trends and advancements in data engineering and recommend suitable tools and technologies for continuous improvement.
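For illustration only: a minimal sketch of the kind of ETL step described above, written in PySpark as it might run in a Databricks notebook. The path, table name, and column names are hypothetical placeholders and are not part of this posting.

# Hypothetical PySpark ETL sketch; path, table, and columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date, trim

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw JSON files landed by an upstream source (illustrative path).
raw = spark.read.json("/mnt/landing/transactions/")

# Transform: trim the key column, drop records missing a key, derive an event
# date, and remove duplicate records.
clean = (
    raw.withColumn("account_id", trim(col("account_id")))
       .filter(col("account_id").isNotNull())
       .withColumn("event_date", to_date(col("event_ts")))
       .dropDuplicates(["account_id", "event_ts"])
)

# Load: append to a Delta table (assumes a Databricks/Delta Lake environment).
clean.write.format("delta").mode("append").saveAsTable("analytics.transactions_clean")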

What you need to have

- Secret clearance
- Bachelor's Degree
- Minimum of 3 years of experience as a Data Engineer, with demonstrated experience creating data pipelines from scratch.
- High level of proficiency in ETL processes and demonstrated, hands-on experience with technologies: Databricks, Python, Spark, Scala, JavaScript/JSON, SQL, and Jupyter Notebooks.
- Strong problem-solving skills and ability to solve complex data-related issues.
- Demonstrated experience working with large datasets and leveraging big data technologies to process and analyze data efficiently.
- Understanding of data modeling/visualization, database design principles, and data governance practices.
- Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams.
- Detail-oriented mindset with a commitment to delivering high-quality results.
- Must be in the National Capital Region or Hampton Roads area and available to work onsite (Crystal City, VA and Alexandria, VA, or Norfolk, VA) 5 days per week, with potential for remote work on secondary tasking.

What we'd like you to have

- TS/SCI clearance or eligibility
- Knowledge of Qlik/Qlik Sense, QVD/QlikView, and Qlik Production Application Standards (QPAS) is a significant plus.
- Recent DoD or IC-related experience.
- High level of Databricks proficiency.
- Previous experience with Advana is a plus.

About BigBear.ai

BigBear.ai is a leading provider of AI-powered decision intelligence solutions for national security, supply chain management, and digital identity. Customers and partners rely on BigBear.ai’s predictive analytics capabilities in highly complex, distributed, mission-based operating environments. Headquartered in Columbia, Maryland, BigBear.ai is a public company traded on the NYSE under the symbol BBAI. For more information, visit https://bigbear.ai/ and follow BigBear.ai on LinkedIn: @BigBear.ai and X: @BigBearai.
