WorkWave LLC

Senior Data Engineer, Remote

WorkWave LLC, Trenton, New Jersey, United States


WorkWave is seeking a motivated Senior Data Engineer to join our growing AI team. In this role, you will work closely with the Artificial Intelligence Architect and the Senior Director of Artificial Intelligence Strategy to design, build, and optimize the data pipelines and infrastructure that drive our AI and Machine Learning (ML) capabilities. Your primary focus will be on supporting the development of scalable, high-performing data platforms that empower our AI initiatives and ensure data is accurate, consistent, and accessible.

As a Senior Data Engineer, you will be an integral part of our data-driven journey, directly supporting the strategic vision of our AI/ML programs. You will collaborate across teams, working with business unit leaders, data scientists, and software engineers to help shape the data architecture needed to advance our technical capabilities.

WHAT YOU'LL DO:

Data Pipeline Design & Implementation: Build and maintain robust data pipelines and ETL processes that ingest, clean, transform, and load data from various internal and external sources. Ensure scalability, security, and high availability.
Data Infrastructure Development: Design, implement, and manage data infrastructure, including data lakes, warehouses, and databases, to support data analytics and AI/ML workflows.
Collaboration with AI/ML Teams: Partner with data scientists, data analysts, and the AI Architect to understand data requirements and build data models that support advanced AI/ML use cases.
Data Quality Management: Develop and implement data quality frameworks to ensure data is accurate, consistent, and reliable across the organization.
Technology Evaluation & Integration: Evaluate and integrate new data technologies and tools to improve data infrastructure, ensuring alignment with current and future requirements.
Data Governance & Security: Collaborate with security teams to establish data governance policies and practices, ensuring compliance with regulations and best practices in data security and privacy.
Performance Tuning & Optimization: Optimize data processing performance, troubleshoot bottlenecks, and proactively improve data pipeline efficiency.
Documentation & Best Practices: Develop and maintain comprehensive documentation and follow best practices for data engineering and architecture.

WHAT YOU'LL BRING:

Bachelor’s degree in Computer Science, Engineering, or a related technical discipline is required.
Minimum of 2 years of experience in data engineering or a related field, including building and optimizing data pipelines.
Proven experience supporting the design, build, and management of data infrastructure for AI/ML projects.
Strong coding skills in Python, Java, or similar programming languages.
Experience with cloud environments (AWS preferred) and big data technologies.
Effective collaboration skills and the ability to work cross-functionally.
Strong proficiency in data pipeline and ETL design, with hands-on experience in tools such as Apache Airflow, Apache NiFi, or similar.
Advanced knowledge of SQL, database management systems (e.g., PostgreSQL, Redshift), and data warehousing solutions.
Expertise in programming languages such as Python or Java, and experience with data processing frameworks (e.g., Apache Spark, Kafka).
Familiarity with modern cloud platforms (e.g., AWS, GCP, Azure) and their data engineering services.
Experience with data modeling, data architecture, and big data technologies (e.g., Hadoop, Hive).
Understanding of MLOps and DevOps principles, with knowledge of CI/CD pipelines for data workflows.
Proven ability to communicate technical details to both technical and non-technical stakeholders.
