Adobe

Senior Software Engineer

Adobe, San Jose, California, United States, 95199


Our Company

Changing the world through digital experiences is what Adobe’s all about. We give everyone—from emerging artists to global brands—everything they need to design and deliver exceptional digital experiences! We’re passionate about empowering people to create beautiful and powerful images, videos, and apps, and transform how companies interact with customers across every screen.

We’re on a mission to hire the very best and are committed to creating exceptional employee experiences where everyone is respected and has access to equal opportunity. We realize that new ideas can come from everywhere in the organization, and we know the next big idea could be yours!

The Opportunity

Adobe Experience Platform (AEP) enables businesses to deliver the right experience at the right time to their customers. The Profile Stores layer is one of the key AEP services, built as a multi-cloud, multi-tenant service supporting thousands of customers and providing a high-volume data pipeline and storage layer for real-time customer profiles. Customer profiles are built on complex data models spanning both record structures that support upserts at scale and time-series, analytics-event-like data that is ingested at very high volumes. With thousands of customers ingesting and storing petabytes of data, resiliency, data correctness, scalability, and efficiency are paramount. This is a great opportunity for engineers to solve extremely interesting challenges of scale and build core services used by all Adobe Digital Experience solutions. As part of building these services, you will work with an exceptionally talented and collaborative team, tackle complex data challenges, and build highly performant services on a variety of open-source technologies.

What you'll Do

- Collaborate with a team of engineers and product managers to build high-performance data ingestion pipelines and data stores that serve the Segmentation and Activation use cases.
- Own the design and implementation of key components for ingesting and maintaining petabytes of Profile data.
- Develop systems to support high-volume data ingestion pipelines handling both streaming and batch processing.
- Leverage popular file and table formats to design storage models that support the required ingestion volumes and data access patterns.
- Explore tradeoffs across different formats and schema layouts driven by workload and application characteristics.
- Deploy production services and iteratively improve them based on customer feedback.
- Follow Agile methodologies using industry-leading CI/CD pipelines.
- Participate in architecture, design, and code reviews.

What you need to succeed

- M.S. in Computer Science or a related field, or equivalent experience, required.
- Experience with distributed processing systems such as Apache Spark, the Hadoop stack, or Apache Kafka.
- Experience with cloud data lake storage such as Azure Data Lake Storage or AWS (Amazon Web Services) S3.
- Understanding of file formats such as Apache Parquet and table formats such as Databricks Delta, Apache Iceberg, or Apache Hudi is preferred.
- Understanding of NoSQL databases such as Apache HBase, Cassandra, MongoDB, or Azure Cosmos DB is a plus.
- Practical experience building resilient data pipelines at scale is preferred.
- Strong programming skills with extensive experience in Java or Scala.
- Leadership skills to collaborate on and drive cross-team efforts.
- Excellent communication skills.
- Adaptability to evolving priorities, willingness to take on challenges outside your comfort zone, learn new technologies, and deliver viable solutions within defined time boundaries.
- Ability to think through solutions from both a short-term and a long-term lens in an iterative development cycle.
