AWS Software Engineer III, Machine Learning
JPMC Candidate Experience page, Wilmington, Delaware, US 19894
Minimum Requirements
- Formal training or certification on software engineering concepts and 3+ years of applied experience
- Hands-on practical experience in system design, application development, testing, and operational stability
- Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile)
- Hands-on experience with AWS EC2, S3, EMR, and SageMaker to migrate machine learning models from Hadoop to AWS
- Solid understanding of planning and managing AWS Cloud infrastructure and enabling platform security using best-in-class cloud security solutions
- Hands-on experience migrating Spark applications from a big data Hadoop platform to AWS using EMR, S3 buckets, EBS, Lambda, and EC2 instances
- Hands-on experience in application development using EC2, S3, EMR, and Lambda
- Experience provisioning AWS resources into multiple cloud accounts through an automated pipeline using Terraform
- Experience with AWS monitoring tools such as CloudWatch and Datadog for tracking utilization and setting thresholds and performance alerts
- Hands-on experience developing automation scripts in Python and Unix shell scripting
Responsibilities
- As a key member of an agile team, you'll manage model development infrastructure, contribute to the firm's goals, and advance your career in a vibrant environment
- Execute software solutions, design, development, and technical troubleshooting, with the ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Produce architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Identify hidden problems and patterns in data and use these insights to drive improvements to coding hygiene and system architecture
- Design and set up machine learning platform infrastructure to develop and train models for Risk and Fraud
- Set up ML tools on AWS infrastructure through an automated pipeline using Terraform (Infrastructure as Code)
- Design and develop tools to support the machine learning platform for critical CCB Risk and Fraud models
- Tune the performance of EMR jobs and apply best practices in Spark job development
- Enable distributed machine learning with XGBoost, TensorFlow, and scikit-learn using EMR RAPIDS, Spark, Dask, and SageMaker
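To give a flavor of the monitoring work described above (setting utilization thresholds and performance alerts on EMR via CloudWatch), the sketch below assembles the parameter set for a CloudWatch alarm on available YARN memory. The namespace and metric name are real EMR CloudWatch metrics, but the alarm name, threshold, and evaluation windows are illustrative assumptions; the resulting dict would typically be passed to boto3's `cloudwatch.put_metric_alarm`.

```python
def emr_memory_alarm_params(cluster_id: str, threshold_pct: float = 85.0) -> dict:
    """Build put_metric_alarm kwargs for an EMR YARN memory alarm.

    AWS/ElasticMapReduce and YARNMemoryAvailablePercentage are real EMR
    CloudWatch names; the alarm name, threshold, and periods below are
    illustrative assumptions.
    """
    return {
        "AlarmName": f"emr-{cluster_id}-yarn-memory-high",  # hypothetical naming scheme
        "Namespace": "AWS/ElasticMapReduce",
        "MetricName": "YARNMemoryAvailablePercentage",
        "Dimensions": [{"Name": "JobFlowId", "Value": cluster_id}],
        "Statistic": "Average",
        "Period": 300,                        # evaluate over 5-minute windows
        "EvaluationPeriods": 3,               # alarm after 3 consecutive breaches
        "Threshold": 100.0 - threshold_pct,   # alert when available memory drops this low
        "ComparisonOperator": "LessThanThreshold",
    }

params = emr_memory_alarm_params("j-ABC123")
print(params["AlarmName"])  # → emr-j-ABC123-yarn-memory-high
```

In practice the dict would be unpacked into `boto3.client("cloudwatch").put_metric_alarm(**params)` from an automation script of the kind the requirements mention.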
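The Hadoop-to-AWS model migration called out in the requirements typically involves re-homing model artifacts from HDFS paths to S3 URIs. The helper below is a minimal sketch of that mapping; the bucket name and key prefix are assumptions, and a real migration script would read them from configuration.

```python
from urllib.parse import urlparse

def hdfs_to_s3_uri(hdfs_path: str, bucket: str, prefix: str = "models") -> str:
    """Map an HDFS artifact path to a target S3 URI.

    `bucket` and `prefix` are illustrative assumptions, not values from
    the posting.
    """
    parsed = urlparse(hdfs_path)
    if parsed.scheme not in ("hdfs", ""):
        raise ValueError(f"expected an HDFS path, got scheme {parsed.scheme!r}")
    key = parsed.path.lstrip("/")  # drop the leading slash so S3 keys stay clean
    return f"s3://{bucket}/{prefix}/{key}"

print(hdfs_to_s3_uri("hdfs://nn1/user/fraud/xgb_v3/model.bin", "ml-artifacts"))
# → s3://ml-artifacts/models/user/fraud/xgb_v3/model.bin
```

A migration job would walk the HDFS model directory, apply this mapping, and copy each artifact (e.g., with `s3distcp` on EMR or boto3 uploads).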