
Cloud Software Engineer

Waypoint Human Capital, Annapolis, MD

Position Type: On-Site
Location: Annapolis Junction, MD
Clearance: Active TS/SCI w/ Poly

Description:
Waypoint's client is seeking an experienced Cloud Software Engineer to join their platform team supporting a large analytic cloud repository. The successful candidate will have experience with large Hadoop- and Accumulo-based clusters and familiarity with open-source technologies. This role focuses primarily on supporting Accumulo and involves working with the Data Distribution (DDS) team to provide data flow management. The candidate should have prior data flow or NiFi flow engineering experience; knowledge of data governance processes, security, compliance, and catalog labeling is highly desired.

Responsibilities:
  • Manage and support large Hadoop and Accumulo-based clusters.
  • Provide data flow management for the Data Distribution (DDS) team.
  • Perform requirements analysis, software development, installation, integration, evaluation, enhancement, maintenance, testing, and problem diagnosis/resolution.
  • Contribute to and support open-source applications.
  • Provide on-call support as required.
  • Learn and integrate new open-source technologies as needed.

Requirements:
  • Active TS/SCI clearance with full scope polygraph is required.
  • At least eight (8) years of experience in software development/engineering.
  • Experience in Java programming for distributed systems, including networking and multi-threading.
  • Bachelor's degree in a technical discipline from an accredited college or university.
  • Five (5) years of additional software engineering experience may be substituted for a bachelor's degree.
  • Hadoop/Cloud Developer Certification is required.
  • Experience with Apache Hadoop, Apache Accumulo, Apache Zookeeper, and Apache NiFi.
  • Experience with Linux operating system monitoring and tuning, and with Linux OS-level virtualization.
  • Experience with HAProxy.
  • Experience as a committer/contributor to open-source applications.
  • Experience with Agile development methodologies.

Desired:
  • Knowledge of Linux OS development, monitoring, and tuning.
  • Experience with Prometheus and Grafana for monitoring and visualization.
  • Familiarity with Kafka for real-time data processing.
  • Experience working with CentOS.
  • Experience with data governance processes (DMRs, DLMS, DSW, DART) and security compliance.
  • Proficiency in managing and troubleshooting Hadoop and Accumulo clusters.