Idaho State Job Bank

Principal Software Engineer - Big Data Processing

Idaho State Job Bank, Boise, Idaho, United States, 83708


Job Description

Our Team

Building on our Cloud momentum, Oracle has formed a new organization: Oracle Health Data Intelligence. This team will focus on product development and product strategy for Oracle Health while building out a complete platform supporting modernized, automated healthcare. This is a net-new line of business, built with an entrepreneurial spirit that promotes an upbeat and creative environment. We are unencumbered and will need your contribution to make this an elite engineering center focused on excellence.

Oracle Health Data Intelligence has a rare opportunity to play a critical role in how Oracle Health products impact and reinvent the healthcare industry by transforming how healthcare and technology intersect. You will have the opportunity to:

+ Reach billions of people with our products and services
+ Create technology that truly impacts the world
+ Have an immediate impact on developing technology
+ Enjoy unlimited growth potential with inspiring work
+ Work with the best minds in the industry
+ Enjoy working in an open, diverse, and productive environment

Career Level - IC4

Responsibilities

About the Job

Oracle Health Data Intelligence is growing and looking for a Principal Software Engineer to join the team.

What You'll Do

As a member of the software engineering division, you will apply intermediate to advanced knowledge of software architecture to perform software development tasks associated with developing, debugging, or designing software applications or operating systems according to provided design specifications. You will build enhancements within an existing software architecture and suggest improvements to that architecture. The work involves problem solving and understanding and applying company policies and processes.

+ High level of fluency with Java, C++, C#, Python, etc.
+ Experience working with big data processing tools such as Spark, Hive, Presto, etc.
+ Experience working with distributed systems
+ Familiarity with technologies and design concepts around big data processing and relational databases, such as ETL, the Hadoop ecosystem, structured data, and SQL schemas and queries
+ Experience building on cloud platforms that enable application design, such as AWS, GCP, Azure, OCI, etc.
+ Experience working with large, scalable enterprise applications
+ Experience building and maintaining RESTful APIs
+ Expertise in writing well-modularized code

To view full details and how to apply, please login or create a Job Seeker account.