Virtual

Data Engineer

Virtual, Los Angeles, California, 90079


Job Title: Data Engineer

Job Overview:
We are seeking a skilled Data Engineer to design, build, and maintain scalable and efficient data pipelines and infrastructure. The ideal candidate will have strong expertise in data integration, transformation, and storage, ensuring the availability and accessibility of data for business analytics and decision-making.

Key Responsibilities:
- Data Pipeline Development: Design, implement, and optimize data pipelines to process and transform raw data into usable formats for analytics and reporting (a minimal pipeline sketch appears at the end of this posting).
- Data Integration: Integrate data from various sources, including APIs, databases, and third-party platforms, ensuring data quality and consistency (see the consistency-check sketch at the end of this posting).
- Database Management: Develop and maintain relational and non-relational databases, optimizing performance for scalability and high availability.
- ETL Processes: Build and manage Extract, Transform, and Load (ETL) workflows for data ingestion, transformation, and storage.
- Data Governance: Ensure compliance with data governance policies, including security, privacy, and quality standards.
- Collaboration: Work closely with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
- Performance Tuning: Monitor and improve the performance of data systems, including database query optimization and infrastructure scaling (see the query-tuning sketch at the end of this posting).
- Documentation: Create and maintain comprehensive documentation for data architecture, pipelines, and processes.

Required Qualifications:
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field.
- Proven experience in data engineering or a similar role.
- Strong programming skills in languages such as Python, Java, or Scala.
- Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, MongoDB).
- Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and related tools such as Redshift, BigQuery, or Snowflake.
- Familiarity with big data frameworks such as Hadoop, Spark, or Kafka.
- Knowledge of data modeling and schema design for structured and unstructured data.
- Hands-on experience with ETL tools and data workflow automation (e.g., Apache Airflow, Talend).

Preferred Qualifications:
- Master’s degree in a related field.
- Experience with real-time data processing and streaming technologies.
- Understanding of data visualization and reporting tools (e.g., Tableau, Power BI).
- Knowledge of machine learning pipelines and data science workflows.
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes.

Key Competencies:
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
- High attention to detail and commitment to data quality.
- Ability to work in a fast-paced, dynamic environment and handle multiple tasks.
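
For illustration only: a minimal sketch of the kind of ETL workflow the responsibilities above describe, written against Apache Airflow's TaskFlow API (Airflow 2.4+) since Airflow is named among the expected tools. The DAG id, the sample records, and the analytics.orders target are hypothetical and not part of this posting.

```python
# Hypothetical sketch of a daily ETL pipeline using Airflow's TaskFlow API.
# The dag_id, sample records, and target table name are invented for the sketch.
from datetime import datetime

from airflow.decorators import dag, task


@dag(dag_id="orders_etl", start_date=datetime(2024, 1, 1),
     schedule="@daily", catchup=False)
def orders_etl():
    @task
    def extract() -> list[dict]:
        # Stub: a real task would call an API or query a source system.
        return [{"order_id": "1", "amount": "19.99"},
                {"order_id": "2", "amount": "not-a-number"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Cast types and drop malformed rows so only clean data is loaded.
        clean = []
        for row in rows:
            try:
                clean.append({"order_id": int(row["order_id"]),
                              "amount": float(row["amount"])})
            except (KeyError, ValueError):
                pass  # a production pipeline would log or quarantine bad rows
        return clean

    @task
    def load(rows: list[dict]) -> None:
        # Stub: a real task would write to the warehouse via a provider hook.
        print(f"loading {len(rows)} rows into analytics.orders")

    load(transform(extract()))


orders_etl()
```

The transform step doubles as a basic data-quality gate: rows that fail the type casts never reach the warehouse.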
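
Likewise hedged: a self-contained sketch of the kind of consistency check the data-integration and governance duties imply, reconciling row counts and amount totals between a staging copy and a warehouse copy after a load. SQLite stands in for the real databases here, and both table names are invented.

```python
# Hypothetical post-load consistency check: after integrating data from several
# sources, confirm that staging and warehouse copies agree. Table names are
# invented; SQLite is used only so the example runs with the standard library.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER, amount REAL);
    CREATE TABLE warehouse_orders (order_id INTEGER, amount REAL);
    INSERT INTO staging_orders VALUES (1, 19.99), (2, 5.00);
    INSERT INTO warehouse_orders VALUES (1, 19.99), (2, 5.00);
""")


def reconcile(conn, source: str, target: str) -> bool:
    """Return True when row counts and amount totals match between tables."""
    src = conn.execute(f"SELECT COUNT(*), TOTAL(amount) FROM {source}").fetchone()
    tgt = conn.execute(f"SELECT COUNT(*), TOTAL(amount) FROM {target}").fetchone()
    return src == tgt


assert reconcile(conn, "staging_orders", "warehouse_orders")
```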
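
Finally, a small sketch of the query-tuning responsibility: using SQLite's EXPLAIN QUERY PLAN (chosen only because it ships with Python) to confirm that adding an index turns a full-table scan into an index search. The events table and its contents are made up.

```python
# Hypothetical query-tuning check: compare the query plan before and after
# adding an index. The events table and its data are invented for the sketch.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, ts TEXT)")
conn.executemany("INSERT INTO events (user_id, ts) VALUES (?, ?)",
                 [(i % 100, f"2024-01-{i % 28 + 1:02d}") for i in range(10_000)])

query = "SELECT COUNT(*) FROM events WHERE user_id = ?"

# Without an index, the planner reports a full scan of the table.
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())

# With an index on user_id, the planner seeks directly to the matching rows.
conn.execute("CREATE INDEX idx_events_user ON events (user_id)")
print(conn.execute("EXPLAIN QUERY PLAN " + query, (42,)).fetchall())
```

The same workflow applies on a production warehouse with that engine's own plan-inspection tooling (e.g., EXPLAIN ANALYZE in PostgreSQL).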