Ninja Analytics, Inc.

Big Data Engineer w/ElasticSearch experience

Ninja Analytics, Inc., Washington, District of Columbia, US 20022



Please note: this position is contingent upon award and is forecasted to be 100% remote.

We are looking for a highly skilled Big Data Engineer with ElasticSearch experience to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining large-scale data processing systems. You will work closely with data scientists, analysts, and other stakeholders to ensure that our data infrastructure supports the business needs and enables advanced analytics and data-driven decision-making.

Ninja Analytics is an information technology company that provides analytical and software development services to companies with AML, fraud detection, trade surveillance, and systems integration, documentation, and governance needs. We provide an exciting, challenging, and rewarding work experience. We select people based on ability, attitude, character, skill, and training without discrimination regarding age, color, credit, marital status, national origin, political belief, race, religious beliefs, or a disability that does not prohibit performance of essential job functions. Ninja employees have competitive wages, paid time off and holidays, as well as health and welfare benefits. Our work environment is comfortable, orderly, safe, and often remote. Lastly, Ninja Analytics respects individual rights and treats all employees with courtesy and consideration.

Key Responsibilities:

Data Architecture and Design:

Design and develop scalable, robust, and high-performance data pipelines and data storage solutions.
Architect and implement data models and schemas optimized for both performance and scalability.

Data Ingestion and Processing:

Develop and maintain ETL (Extract, Transform, Load) processes to ingest data from various sources, including structured and unstructured data.
Implement data transformation and cleaning procedures to ensure data quality and consistency.

Big Data Technologies:

Utilize big data technologies such as Hadoop, Spark, Kafka, and Hive to process and analyze large datasets.
Optimize data processing workflows to ensure efficient resource utilization and minimize latency.

Data Integration and Management:

Integrate data from multiple sources, including databases, APIs, and third-party data providers.
Manage and maintain data lakes and data warehouses, ensuring data is organized and accessible for analysis.

Collaboration and Communication:

Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet their needs.
Provide technical guidance and support to team members and other departments as needed.

Continuous Improvement:

Stay updated with the latest industry trends and best practices in big data engineering.
Continuously improve data processing techniques and tools to enhance the overall efficiency and effectiveness of the data engineering team.

Qualifications:

Education and Experience:

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Proven experience as a Big Data Engineer or in a similar role, with a strong focus on big data technologies and data processing.

Required Skills:

Knowledge of database systems, both SQL (e.g., PostgreSQL, MySQL) and NoSQL (e.g., Cassandra, MongoDB).
Strong experience with big data technologies and frameworks (e.g., Hadoop, Spark, Kafka, Hive).
Proficiency in programming languages such as Java, Scala, C++, or Python.
Experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and their data services.
Proficiency with Linux operating systems.

Nice-to-Have Skills:

Implement and maintain ElasticSearch clusters, ensuring data availability, reliability, and consistency across distributed environments.
Integrate ElasticSearch with other big data tools and frameworks, such as Apache Kafka, Hadoop, and Spark, to streamline data processing pipelines.
Integrate data components into Java- and Python-based applications.

Soft Skills:

Excellent analytical and problem-solving skills.
Strong verbal and written communication skills.
Ability to work effectively both independently and as part of a team.
Strong organizational skills and the ability to manage multiple tasks and priorities.

Preferred Qualifications:

Experience with real-time data processing and stream processing frameworks.
Knowledge of data warehousing solutions like Amazon Redshift, Google BigQuery, or Snowflake.
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
Certification in big data or cloud technologies.

Security Clearance:

Selected applicants must be US citizens and able to obtain and maintain a Top Secret security clearance.

Department: Information Technology
Locations: Washington
Remote status: Hybrid
Yearly salary: $125,000 - $155,000