HashiCorp
SIEM Security Engineer (US)
HashiCorp, San Francisco, California, United States, 94199
P3
US - Remote
JR103862
About the team
We're looking for talented Data Engineers to join our Threat Detection and Response (TDR) team. This team helps defend HashiCorp by enhancing strategic detection, response, and prevention patterns across all of our products and the enterprise. This person will be responsible for expanding and maturing our approach to delivering visibility across all major cloud providers, ensuring we have an accurate record of actions performed at each layer of our technology stacks.

About us
HashiCorp is a fast-growing organization that solves development, operations, and security challenges in infrastructure so organizations can focus on business-critical tasks. We build tools that span these gaps: our tools manage physical and virtual machines, Windows and Linux, SaaS and IaaS, and more. Our open source software is used by millions of users to provision, secure, connect, and run any infrastructure for any application. The Global 2000 use our enterprise software to accelerate application delivery and drive innovation through software.
What you'll do (responsibilities)
As a member of our Security team, you'll be responsible for ensuring best practices are implemented across our multi-cloud environment. You will partner with engineering and other stakeholders to define and drive secure-by-default environments supporting our products and the enterprise. We're heavily invested in tooling and automation, and the ability to continually improve these areas will be key to success as we scale our environments to meet customer demand.

Engineering at HashiCorp is largely a remote team. While prior experience working remotely isn't required, we are looking for team members who perform well given a high level of independence and autonomy.

HashiCorp embraces diversity and equal opportunity. We are committed to building a team that represents a variety of backgrounds, perspectives, and skills. We believe the more inclusive we are, the better our company will be.

What you'll need (basic qualifications)
- 2+ years in an engineering role focused on large-scale data collection in the cloud, using cloud-native tooling
- Working knowledge of batch or streaming data processing pipelines:
  - Collect, normalize, tag, and enrich
  - Windowing and time-series transformation
- Working knowledge of information-retrieval patterns and query-workload optimization:
  - Developing aggregates, views, summaries, and indices to accelerate access to data
  - Profiling query workloads using query-planner output or other diagnostic tooling to identify performance bottlenecks
  - Profiling resource consumption to optimize expenditure on storage and transit
  - Planning, dispatching, and monitoring query workloads to ensure on-time delivery of information with optimal use of resources
- Experience working with multiple data query models: relational, key-value, graph, document, full-text search
- Maintaining and evolving shared query content through source-code management practices
- Natural curiosity and an interest in the Threat Detection, Incident Response, Fraud, and/or Threat Intel problem space, and the desire to be exposed to and develop these skill areas while serving in a development-focused role
- Experience taking a periodic on-call rotation in a distributed team
- Publicly released tools or modules, or open source contributions
- Experience with some or all of the following:
  - Python, Go, or other languages and a willingness to learn
  - Terraform, Vault, Packer
  - AWS, GCP, Azure (e.g., AWS EC2, Lambda, Step Functions, ECR/ECS/EKS, S3)
  - Logging infrastructure and ETL pipelines: fluentd, Logstash, Vector, Kafka, Kinesis, or similar
  - CI/CD: building pipelines involving Jenkins, CircleCI, GitHub Actions, etc.
- Solid foundation in Linux and exposure to Linux in cloud provider environments

#LI-Remote

Individual pay within the range will be determined based on job-related factors such as skills, experience, and education or training.

The base pay range for this role in the SF Bay Area / NYC area is: $119,000-$140,000 USD
The base pay range for this role in Seattle Metro, Denver / Boulder Metro, New York (excluding NYC), Washington D.C., or California (excluding SF Bay Area) is: $109,100-$128,300 USD
The base pay range for this role in Colorado (excluding Denver / Boulder Metro) and Washington (excluding Seattle Metro) is: $99,200-$116,700 USD
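The pipeline skills listed above (collect, normalize, tag, enrich; windowing) can be sketched in miniature. This is an illustrative example only, not HashiCorp tooling: the event shapes, field names, and tags are hypothetical stand-ins for what a cloud audit log might emit.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical raw events, shaped loosely like a cloud provider's audit log.
RAW_EVENTS = [
    {"eventTime": "2024-01-01T00:00:05Z", "userIdentity": "alice", "eventName": "GetObject"},
    {"eventTime": "2024-01-01T00:00:45Z", "userIdentity": "bob", "eventName": "PutObject"},
    {"eventTime": "2024-01-01T00:01:10Z", "userIdentity": "alice", "eventName": "DeleteObject"},
]

def normalize(event):
    """Map provider-specific fields onto a common schema."""
    ts = datetime.fromisoformat(event["eventTime"].replace("Z", "+00:00"))
    return {"ts": ts, "actor": event["userIdentity"], "action": event["eventName"]}

def enrich(event, tags):
    """Attach static context (environment, source) to a normalized event."""
    return {**event, **tags}

def tumbling_window(events, seconds=60):
    """Group events into fixed, non-overlapping time windows keyed by epoch bucket."""
    windows = defaultdict(list)
    for e in events:
        bucket = int(e["ts"].timestamp()) // seconds * seconds
        windows[bucket].append(e)
    return dict(windows)

events = [enrich(normalize(e), {"env": "prod", "source": "audit-log"}) for e in RAW_EVENTS]
windows = tumbling_window(events, seconds=60)
# The first two events share a one-minute window; the third lands in the next.
```

A real deployment would run these stages inside a streaming framework (e.g. Kafka consumers or Kinesis processors) rather than over an in-memory list, but the normalize/enrich/window shape is the same.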
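"Profiling query workloads using query-planner output" can likewise be sketched with SQLite's EXPLAIN QUERY PLAN, purely for illustration; production log stores differ, and the table and index names here are invented for the example.

```python
import sqlite3

# In-memory database standing in for a real event store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (ts INTEGER, actor TEXT, action TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i, f"user{i % 10}", "GetObject") for i in range(1000)],
)

def plan(query):
    """Return the planner's textual description of how a query will execute."""
    rows = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
    return " ".join(row[-1] for row in rows)

query = "SELECT COUNT(*) FROM events WHERE actor = 'user3'"
before = plan(query)  # reports a full scan of the events table
conn.execute("CREATE INDEX idx_events_actor ON events (actor)")
after = plan(query)   # reports a search using idx_events_actor
```

Comparing planner output before and after adding an index (or an aggregate, view, or summary table) is the basic loop behind the bottleneck-hunting described in the qualifications.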
Job Industries: Other