A-Line Staffing Solutions
AI Big Data Engineer
A-Line Staffing Solutions, Troy, Michigan, United States, 48083
We are actively building out an AI team. We are looking for people with a proven track record who are willing to experiment with new ideas, invest time in them, fail fast, and move on if they don't work out. We want team members with a consistent interest in continuous learning, especially in cloud technologies, who follow the latest generative AI technologies and trends, learn on their own, and bring self-motivated, self-directed initiative to technology and AI exploration.

Summary
As a Data Engineer, you will focus on creating a Unified Data Platform. You will design, develop, and maintain data pipelines, data lakes, and data platforms that support the analytics and business intelligence needs of our clients. You will work with cutting-edge technologies and tools, such as Spark, Kafka, AWS, Azure, and Kubernetes, to handle large-scale and complex data challenges. You will also collaborate with full stack developers, data scientists, analysts, and stakeholders to ensure data quality, reliability, and usability. You must be comfortable working with very large datasets.

NO C2C

Main Responsibilities
- Build automated pipelines to extract and process data from a variety of legacy platforms (predominantly SQL Server), e.g., stored procedures and Glue processing (an illustrative sketch follows the Qualifications and Skills section).
- Implement data-related business logic on modern data platforms, such as AWS Glue, Databricks, and Azure, using best practices and industry standards.
- Create vector databases, data marts, and the data models to support them.
- Optimize and monitor the performance, reliability, and security of data systems and processes.
- Integrate and transform data from (or to) various sources and formats, such as structured, unstructured, streaming, and batch.
- Develop and maintain data quality checks, tests, and documentation.
- Support data analysis, reporting, and visualization using tools such as SQL, Python, Tableau, and QuickSight.
- Research and evaluate new data technologies and trends to improve data solutions and existing capabilities.

Qualifications and Skills
- Bachelor's degree or higher in Computer Science, Engineering, Mathematics, or a related field
- At least 5 years of experience in data engineering or a similar role (previous DBA experience is a plus)
- Experience with big data frameworks and tools such as Spark, Hadoop, Kafka, and Hive
- Expert in SQL, including knowledge of efficient query and schema design, DDL, data modeling, and use of stored procedures
- Proficient in at least one programming language, such as Python, Go, or Java
- Experience with CI/CD, containerization (e.g., Docker, Kubernetes), and orchestration (e.g., Airflow)
- Experience building production systems with modern ETL, ELT, and data systems such as AWS Glue, Databricks, Snowflake, Elastic, and Azure Cognitive Search
- Experience deploying data infrastructure on cloud platforms (AWS, Azure, or GCP)
- Strong knowledge of data quality, data governance, and data security principles and practices
- Excellent communication, collaboration, and problem-solving skills
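For candidates unfamiliar with the kind of pipeline work described under Main Responsibilities, the following is a minimal, illustrative PySpark sketch of extracting a table from a legacy SQL Server database over JDBC and landing it in a data lake. The connection string, credentials, table, and S3 path are all hypothetical placeholders, not part of this role's actual systems.

# Illustrative sketch only: extract a legacy SQL Server table and land it as
# Parquet in a data lake. All names and connection details are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy-sqlserver-extract").getOrCreate()

# Hypothetical JDBC source; in practice, credentials would come from a
# secrets manager rather than hard-coded options.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-db.example.com:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "change-me")
    .load()
)

# Light cleanup and partitioning before writing to the lake.
cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date"))
    .dropDuplicates(["order_id"])
)

(
    cleaned.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-data-lake/raw/sales/orders/")
)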