Salesforce, Inc.
Software Engineer (Big Data) - Senior/Lead/Principal
Salesforce, Inc., Palo Alto, California, United States, 94306
Note: By applying to the Software Engineering posting, recruiters and hiring managers across the organization hiring Software Engineers will review your resume. Our goal is for you to apply once and have your resume reviewed by multiple hiring teams.
Interested in joining a large-scale data platform running on public and private cloud substrates? Salesforce is building out our Big Data Platform Services team to reinvigorate the way we architect, deliver, and operate the platforms and services that run in our own data centers and in public clouds - at consumer web scale! We are looking to add software engineers and leads, with experience building and owning distributed services, who can step up and own big chunks of this vision.
Your Impact:
Work with Phoenix, HBase, MapReduce, YARN, Kafka, Spark, Hive, Presto, or equivalent large-scale distributed systems technologies on a modern containerized deployment stack
Become an open source contributor by working with teams that include PMC members and committers on various Apache projects
Build database services on AWS, GCP, or other public cloud substrates
Eat, sleep, and breathe services: balance live-site management, feature delivery, and retirement of technical debt
Design, develop, debug, and operate resilient distributed systems that run across thousands of compute nodes in multiple data centers
Participate in the team's on-call rotation to address complex problems in real time and keep services operational and highly available
Required Skills:
A related technical degree
4+ years of backend software development experience
Deep knowledge of programming languages: Java, C++, and/or Python
Experience owning and operating multiple instances of a critically important service
Experience with Agile development methodology and test-driven development
Experience using telemetry and metrics to drive operational excellence
Strong, heartfelt opinions on the CAP theorem; the ability to sketch out four different consistency models on a single napkin and defend each of them; and an implementation-level understanding of Paxos, Raft, and ZooKeeper