Dice
100% Remote Senior Data Engineer
Dice, Little Ferry, New Jersey, US 07643
Dice is the leading career destination for tech experts at every stage of their careers. Our client, Jobot, is seeking the following. Apply via Dice today!
100% Remote Senior Data Engineer, up to $160k base salary
This Jobot Job is hosted by: Lucas Watson
Are you a fit? Easy Apply now by clicking the "Apply Now" button and sending us your resume.
Salary: $120,000 - $160,000 per year
A bit about us:
Our client, an innovation- and technology-focused upstream oil and gas enterprise, is seeking a Senior Data Engineer for a full-time, direct-hire, 100% remote role.
Why join us?
- Fully remote opportunity
- Innovation-driven company
- Cutting-edge technology, far ahead of the rest of the industry
- Stable, profitable, and growing organization
Job Details
As the Senior Data Engineer at our client's oil and gas company, you will be responsible for developing, implementing, and maintaining their data infrastructure and pipelines to support the efficient and effective collection, storage, and analysis of large volumes of diverse data. You will work closely with cross-functional teams, including data scientists and analysts, to design and optimize data models and algorithms for complex data mining and predictive analytics projects. Additionally, you will be expected to lead a team of data engineers and provide technical guidance and expertise to ensure the integrity, security, and scalability of our client's data systems. Your in-depth knowledge of data management and ETL processes, along with your passion for innovation and emerging technologies, will be instrumental in unlocking the value of our client's data assets and driving data-driven decision-making throughout their organization.
Technical Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; an advanced degree is a plus.
- Proven experience (5+ years) as a Data Engineer, with a focus on designing, developing, and implementing data integration and processing solutions.
- Strong proficiency in programming languages such as Python, PySpark, or Scala, and expertise in distributed SQL and database technologies.
- Extensive hands-on experience with big data technologies and frameworks.
- Exceptional expertise in AWS serverless technologies, including Lambda, Step Functions, and API Gateway, with a track record of designing and implementing serverless data solutions.
- In-depth knowledge of AWS data services, including Amazon S3 for data storage, AWS Glue for data preparation, and Amazon Athena for consumption.
- Proficiency in building and optimizing data pipelines on AWS using services such as AWS Data Pipeline or AWS Glue.
- Familiarity with AWS streaming data services such as Amazon Kinesis for real-time data processing.
- Strong understanding of data modeling, ETL processes, data normalization/denormalization, and master data management principles within an AWS serverless context.
- Experience with version control systems (e.g., Git) and code deployment strategies within AWS.
- Proven ability to work with infrastructure-as-code (IaC) tools such as AWS CloudFormation or Terraform for managing serverless infrastructure.
- Excellent problem-solving and analytical skills, with the ability to translate complex business requirements into technical specifications within the AWS serverless framework.
- Knowledge of best practices for security and compliance in AWS, including IAM roles and policies.
- Strong communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams and present complex ideas related to AWS serverless solutions to non-technical stakeholders.
- Experience with containerization technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes) within the AWS ecosystem is a plus.
Key Responsibilities:
- Develop and implement scalable, efficient data pipelines for capturing, processing, and storing large volumes of structured and unstructured data from various internal and external sources.
- Collaborate with cross-functional teams to understand business requirements, data needs, and challenges, and design comprehensive solutions to address them.
- Architect, build, and optimize data integration frameworks and platforms to ensure seamless data exchange between internal systems, external partners, and cloud-based services.
- Design and implement real-time and batch data processing systems to support analytics, reporting, and business intelligence initiatives.
- Establish data governance practices, data quality standards, and data security protocols to maintain the integrity, confidentiality, and availability of corporate data assets.
- Monitor and analyze data pipelines for performance, reliability, and efficiency, and proactively address any issues or bottlenecks.
- Stay up to date with emerging technologies, industry trends, and best practices in data engineering, and leverage them to drive innovation and optimization within the organization.
- Lead and mentor junior data engineers, providing guidance and support in their professional growth and development.
Interested in hearing more? Easy Apply now by clicking the "Apply Now" button.