TechIntelli Solutions
Staffing Manager @ TechIntelli Solutions | Extensive Recruitment Strategies
Location: Remote

About the Role:
We are seeking an exceptional Principal Software Engineer with deep expertise in cloud and big data engineering to drive multiple high-impact projects. The ideal candidate will have hands-on experience with distributed systems, real-time data processing, and cloud-native architectures. This role requires strong technical leadership and the ability to work in a fast-paced environment with cutting-edge technologies.

Key Responsibilities:
Design and develop scalable, high-performance data pipelines using Java, Apache Spark, PySpark, Scala, and Flink.
Build and optimize real-time data streaming applications using Kafka and Flink (see the illustrative sketch below).
Develop and manage ETL workflows using AWS Glue, Redshift, and SQL-based transformations for business intelligence.
Contribute to open-source communities, enhancing distributed computing frameworks such as Apache Spark, Apache Flink, Apache Iceberg, and Apache Hudi.
Implement containerized big data solutions using Kubernetes and Docker.
Automate infrastructure provisioning using Terraform in AWS/GCP environments.
Optimize the performance of distributed systems and improve their scalability, reliability, and efficiency.
Collaborate with data scientists, engineers, and stakeholders to drive innovation in big data solutions.

Required Skills & Qualifications:
8+ years of experience in software engineering with a focus on big data and cloud technologies.
Strong proficiency in Java, Scala, PySpark, and SQL for large-scale data processing.
Expertise in Apache Spark, Apache Flink, Apache Iceberg, Apache Hudi, Hadoop, and other distributed computing frameworks.
Hands-on experience with real-time data streaming (Kafka, Flink).
Deep knowledge of AWS or GCP cloud services, including data storage, compute, and security best practices.
Proficiency in Kubernetes and Docker for deploying scalable data applications.
Experience with Terraform and infrastructure-as-code (IaC) practices.
Strong understanding of ETL pipelines, data warehousing, and performance tuning.
A track record of contributions to open-source projects is highly desirable.
Excellent problem-solving, leadership, and communication skills.

Preferred Qualifications:
Experience with CI/CD pipelines for data engineering workflows.
Knowledge of machine learning pipelines and their integration with big data ecosystems.
Familiarity with GraphQL, REST APIs, and microservices architecture.

If you are passionate about building cloud-native big data solutions and pushing the boundaries of distributed computing, we want to hear from you!
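For illustration only, here is a minimal sketch of the kind of Kafka-to-Spark streaming pipeline the responsibilities above describe. The broker address, topic name, event schema, and checkpoint path are placeholder assumptions rather than details from this posting, and running it requires the Spark Kafka connector on the classpath.

```python
# Minimal PySpark Structured Streaming sketch: read JSON events from Kafka,
# parse them into columns, and write to a console sink for demonstration.
# Requires the Kafka connector, e.g.:
#   spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:<version>
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (
    StructType, StructField, StringType, DoubleType, TimestampType,
)

spark = (
    SparkSession.builder
    .appName("events-pipeline")  # hypothetical application name
    .getOrCreate()
)

# Assumed event schema; a real pipeline would derive this from a schema registry.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("ts", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")  # assumed broker
    .option("subscribe", "events")                        # assumed topic
    .load()
    # Kafka values arrive as bytes; decode to string and parse JSON into columns.
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Console sink for demonstration; a production job would instead write to a
# table format such as Apache Iceberg or Apache Hudi.
query = (
    events.writeStream
    .format("console")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/events")  # assumed path
    .start()
)
query.awaitTermination()
```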
Seniority level: Director
Employment type: Contract
Job function: Information Technology