JPMorgan Chase & Co.
Senior Lead Software Engineer (Sr. AWS Solution Data Engineer)
JPMorgan Chase & Co., Plano, Texas, US 75086
Be an integral part of an agile team that's constantly pushing the envelope to enhance, build, and deliver top-notch technology products.
As a Senior Lead Software Engineer at JPMorgan Chase within the Commercial & Investment Bank, specifically as a part of the Digital & Platform Services division, you will play a pivotal role in an agile team. Your responsibilities will encompass enhancing, creating, and delivering high-quality technology products in a secure, stable, and scalable way. Leveraging your technical prowess and problem-solving abilities, you will be instrumental in making a significant business impact and tackling a broad spectrum of challenges across diverse technologies and applications.
Job responsibilities:
Data Ingestion Pipelines : Design, develop, and maintain pipelines that handle large volumes of data, ensuring efficient data flow between sources and destinations.
Collaboration : Work with data engineers, data scientists, and stakeholders to ensure data quality and consistency through normalization and integration.
Optimize performance : Identify and resolve bottlenecks in data processes to enhance performance and throughput.
Monitoring and Troubleshooting : Implement monitoring systems to detect and address data ingestion issues proactively.
Data Security : Ensure compliance and security by implementing access controls, encryption, and protection measures.
Documentation : Maintain detailed documentation of processes, configurations, and best practices.
Development Expertise : Use programming languages such as Java or Python to build and maintain data ingestion solutions, focusing on code quality and performance (a minimal illustrative sketch follows this list).
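For illustration only (not part of the original posting): a minimal Python sketch of the kind of ingestion step described above, assuming a local CSV source, a JSON-lines destination, and a hypothetical customer_id field used as a quality gate.

    import csv
    import json
    import logging

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("ingest")

    def ingest(csv_path: str, jsonl_path: str) -> None:
        """Copy valid CSV rows to a JSON-lines file, logging rejects."""
        accepted, rejected = 0, 0
        with open(csv_path, newline="") as src, open(jsonl_path, "w") as dst:
            for row in csv.DictReader(src):
                if not row.get("customer_id"):  # basic data-quality gate (hypothetical field)
                    rejected += 1
                    continue
                dst.write(json.dumps(row) + "\n")
                accepted += 1
        log.info("ingested %d rows, rejected %d", accepted, rejected)

    if __name__ == "__main__":
        # Placeholder file names for the sketch only.
        ingest("raw_customers.csv", "curated_customers.jsonl")

A production pipeline would add schema validation, retries, and monitoring hooks rather than a single logging call, but the shape of the work (read, validate, transform, write, observe) is the same.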
Required qualifications, capabilities, and skills:
Formal training or certification in data engineering concepts and 5+ years of applied experience, with an additional 4+ years of delivering hands-on cloud-native solutions.
AWS Services : Proficiency in AWS data services such as Glue, Athena, or Neptune.
Database Knowledge : Experience with graph databases (Cypher, Gremlin) or relational databases (DML, DDL, PL/SQL).
Cloud Services Management : Experience with Terraform for managing cloud services.
CI/CD Integration : Experience with CI/CD processes using tools such as Git/Bitbucket, Jenkins, or Spinnaker.
Data Pipeline Development : Experience with Spark, Glue, or similar technologies for building data pipelines (see the sketch after these lists).
Data Formats : Proficiency in handling JSON, XML, and CSV formats.
Full Delivery Execution : Demonstrated ability to design, develop, test, and document secure data systems.
Interpersonal Skills : Strong communication skills for collaborative work across teams.
This role requires a blend of technical expertise, problem-solving skills, and the ability to work collaboratively in a fast-paced, agile environment.
Preferred qualifications, capabilities, and skills:
Agile Environment : Experience with Agile development and participation in Agile ceremonies.
Full Stack Development : Experience building REST services with Java/Spring/Spring Boot.
Test-Driven Development : Familiarity with modern source control and continuous integration.
API and Data Interfaces : Experience with APIs, GraphQL, and data ingestion tools such as Apache Kafka, Apache NiFi, or similar.
Data Warehousing : Understanding of data warehousing and data modeling on AWS Redshift.
Big Data Technologies : Knowledge of distributed computing and big data technologies such as Hadoop and Spark.
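Likewise, for illustration only: a minimal PySpark sketch of the kind of batch pipeline named in the qualifications, assuming a local SparkSession, a placeholder event schema (customer_id, event_ts), and placeholder S3 paths. A production AWS Glue job would typically use a GlueContext and the Glue Data Catalog instead.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    # Local session for the sketch; a Glue job would obtain a GlueContext instead.
    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    # Read raw JSON events (placeholder path, not a real bucket).
    raw = spark.read.json("s3://example-bucket/raw/events/")

    # Normalize: trim the key, drop records missing it, de-duplicate,
    # and derive a partition column from the event timestamp.
    clean = (
        raw.withColumn("customer_id", F.trim(F.col("customer_id")))
           .dropna(subset=["customer_id"])
           .dropDuplicates(["customer_id", "event_ts"])
           .withColumn("event_date", F.to_date("event_ts"))
    )

    # Write partitioned Parquet so Athena or the Glue catalog can query it downstream.
    clean.write.mode("overwrite").partitionBy("event_date").parquet(
        "s3://example-bucket/curated/events/"
    )

    spark.stop()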