Asystem
Principal Software Engineer
Asystem, New York, New York, US, 10261
North is an AWS optimization platform that automates your cloud finance in 3 clicks, allowing you to save 50% on your cloud expenses while spending zero time doing so.

Summary:
We are looking for an experienced and motivated DevOps engineer with a focus on Data Engineering to join us as we scale our AWS-based infrastructure and introduce exciting features for our current and future customers. You will work closely with the CTO and the Engineering team on features that will automate the future of cloud FinOps. This is a perfect opportunity to join an early-stage, VC-funded, high-growth start-up in the FinTech space.

Responsibilities:
AWS Infrastructure Management:
Proactively maintain, monitor, and enhance our application's serverless microservice architecture on AWS to ensure maximum reliability and performance. Employ best practices in infrastructure as code (IaC) using Terraform, enabling scalable and manageable deployments. Comfortable building and deploying Lambda, API Gateway, Step Functions, DynamoDB, CloudWatch, IAM roles, S3, Athena, Glue, SSM, and others.
Cost Optimization:
Vigilantly monitor AWS service costs and usage to identify optimization opportunities. Develop and implement strategies to keep operational costs to a minimum while maintaining service reliability and performance standards. Our primary objective is to achieve rate optimization for our clients, which requires implementing cost optimization strategies internally before proposing them to clients.
Compliance and Security:
Spearhead initiatives to establish and maintain SOC 2 compliance across all aspects of our backend infrastructure. Craft and maintain code that adheres to security best practices, with a strong emphasis on least-privilege permissions and robust authorization mechanisms to safeguard against vulnerabilities. A strong understanding of OAuth and RBAC principles is expected.
Development and Maintenance:
Collaborate closely with the CTO and development teams to devise and implement new features and improve existing functionality. Take charge of diving into and refining existing codebases for enhanced performance, maintainability, and scalability.
Data Pipeline Management:
Design, code, and manage industry-standard data pipelines, optimized for efficiency and for integration with machine learning processes. Ensure data flows adhere to ACID principles and enhance data processing capabilities to effectively support predictive analytics and machine learning-driven applications.
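For illustration only, here is a minimal PySpark sketch of the kind of pipeline work described above; the S3 paths and CUR column names are assumptions for the example, not specifics of the role.

    # Minimal PySpark sketch: roll up AWS Cost and Usage Report (CUR) data
    # into a daily cost-per-service table for downstream analytics/ML.
    # Bucket paths and column names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("cur-daily-rollup").getOrCreate()

    # Read raw CUR data exported to S3 in Parquet format.
    usage = spark.read.parquet("s3://example-cur-bucket/cur/parquet/")

    # Aggregate unblended cost per service per usage day.
    daily_cost = (
        usage
        .groupBy("line_item_usage_start_date", "line_item_product_code")
        .agg(F.sum("line_item_unblended_cost").alias("daily_unblended_cost"))
    )

    # Overwrite the output prefix so reruns stay idempotent.
    daily_cost.write.mode("overwrite").parquet("s3://example-analytics-bucket/daily_cost/")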
AWS Cost and Usage Analysis:
Possess a deep understanding of AWS service costs, Savings Plans, Reserved Instances, usage patterns, and optimization techniques. Regularly provide recommendations for cost-effective resource utilization without compromising on performance or reliability.
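As a hedged example of the kind of cost analysis involved, the sketch below pulls per-service unblended cost with the Cost Explorer API; the date range and metric choice are illustrative assumptions.

    # Minimal sketch: list unblended cost per AWS service for one month
    # using the Cost Explorer API. Dates and metric are assumptions.
    import boto3

    ce = boto3.client("ce")

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2024-01-01", "End": "2024-02-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    # Print each service's cost for the period, highest first.
    groups = response["ResultsByTime"][0]["Groups"]
    for group in sorted(groups, key=lambda g: -float(g["Metrics"]["UnblendedCost"]["Amount"])):
        service = group["Keys"][0]
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{service}: ${amount:,.2f}")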
Technical Expertise:
Demonstrate advanced proficiency in Python, with the ability to develop complex applications, scripts, and automation tools. Utilize this expertise in API development, focusing on optimizing direct AWS service integration, security, and authorization protocols.
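As a minimal illustration of Python API development with direct AWS service integration, the sketch below shows an API Gateway (Lambda proxy) handler reading from DynamoDB; the table name, key schema, and route are hypothetical.

    # Minimal sketch of a Python Lambda handler behind API Gateway
    # (proxy integration) that reads one item from DynamoDB.
    # Table name and key names are hypothetical placeholders.
    import json

    import boto3

    dynamodb = boto3.resource("dynamodb")
    table = dynamodb.Table("example-accounts")

    def handler(event, context):
        # Proxy integration supplies path parameters on the event.
        account_id = (event.get("pathParameters") or {}).get("accountId")
        if not account_id:
            return {"statusCode": 400, "body": json.dumps({"error": "accountId is required"})}

        item = table.get_item(Key={"account_id": account_id}).get("Item")
        if item is None:
            return {"statusCode": 404, "body": json.dumps({"error": "not found"})}

        return {"statusCode": 200, "body": json.dumps(item, default=str)}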
Infrastructure as Code (IaC):
Deploy and manage infrastructure using Terraform, creating reusable modules that promote efficiency and consistency across deployment processes. Embrace and advocate for IaC principles to streamline infrastructure provisioning and management tasks.
Independence:
Act as a self-starter with the capability to work independently, requiring minimal guidance. Take ownership of product features from conception to deployment, organizing timelines and resources effectively to meet project milestones and objectives.
Collaboration and Problem Solving:
Engage in productive collaboration with the CTO and development teams to address, debug, and resolve backend-related issues promptly. Exhibit strong problem-solving skills, with a focus on delivering solutions that enhance system reliability and functionality.

Requirements:
- Bachelor's degree in Computer Science, Software Engineering, or a related field; this may be substituted with strong knowledge of and experience with AWS and computer science fundamentals
- AWS Associate-level certification (DevOps or Solutions Architect preferred)
- Solid experience with AWS services and backend development (Lambda, API Gateway, DynamoDB, Glue, Athena, SQS, SNS)
- Proficient understanding of data engineering concepts (SQL, PySpark, DataFrames, NumPy)
- Experience building and maintaining scalable infrastructure with Terraform
- Ability to diagnose and troubleshoot complex technical issues using CloudWatch and solid unit testing
- Strong knowledge of Git and version control
- Strong coding skills in Python preferred; SQL is optional
- Effective verbal and written communication skills

Nice to have:
- Experience working in a fast-paced start-up environment
- Familiarity with low-latency coding principles
- Understanding of AWS security principles
- Familiarity with Agile methodologies and JIRA
- Familiarity with AWS FinOps and Cost and Usage Report (CUR) concepts
- Front-end experience
- GCP experience