iQuasar
Senior Data Engineer
iQuasar, Laurel, Maryland, United States, 20724
iQuasar LLC is seeking a Senior Data Engineer in Laurel, Maryland. We strive to provide the next generation of cutting-edge technologies. Our growth means exciting career opportunities for talented professionals in engineering, software development, and other key areas. We offer competitive compensation and benefits, including health, vision, and dental insurance, a matching 401(k) plan, and the other benefits listed below, as well as excellent training and a vibrant working environment. Our employees are exceptional, giving us a competitive advantage by innovating solutions with a strong sense of mission and integrity.
One of our clients in Laurel, Maryland, needs a Senior Data Engineer for a contract position.
• Position: Senior Data Engineer
• Clearance: Not required
• Location: 14501 Sweitzer Lane, Laurel, MD 20707
Job Description
The Senior Data Engineer with expertise in Azure Synapse Analytics and Microsoft Fabric will design, develop, and implement scalable data solutions to support analytics and reporting needs. The role involves creating, optimizing, and managing data pipelines to efficiently move and transform data across the Azure ecosystem. The candidate will be responsible for setting up and managing data lakes to store large volumes of structured and unstructured data, ensuring high availability and security. They will collaborate with cross-functional teams to gather data requirements and create efficient, scalable architectures. Strong experience in ETL development, data modeling, and cloud technologies like Azure Data Factory, Azure Data Lake, and Synapse Analytics is essential. The candidate will also ensure data quality, security, and compliance with governance standards.
Duties / Responsibilities
• Create and optimize complex data pipelines using Azure Data Factory, Synapse Analytics, and other Azure tools to extract, transform, and load data efficiently.
• Implement and maintain Azure Data Lake solutions to store large volumes of structured and unstructured data, ensuring scalability, performance, and security.
• Integrate data from various sources, including relational databases, NoSQL databases, APIs, and flat files, into the Azure environment for analysis and reporting.
• Design and develop robust data architectures, optimizing for performance and scalability in Azure Synapse Analytics and Azure Data Lake environments.
• Develop efficient ETL/ELT processes using Azure Data Factory or other Azure tools to ensure timely and accurate data loading and transformation.
• Ensure data pipelines run smoothly by monitoring, troubleshooting, and resolving issues to minimize downtime and data inconsistencies.
• Continuously optimize data pipelines and query performance, especially within Azure Synapse, to handle large data sets and complex transformations efficiently.
• Work closely with data scientists, analysts, and business teams to understand data requirements and deliver scalable data solutions that support analytics needs.
• Implement and enforce security best practices, ensuring data lakes, pipelines, and analytics solutions comply with Azure security standards and data governance policies.
• Design and implement logical and physical data models that support high-performance querying and reporting within Azure Synapse.
• Implement data quality checks, data validation processes, and error handling within data pipelines to ensure accuracy and consistency of data.
• Ensure adherence to data governance frameworks, managing data lineage and metadata and ensuring compliance with organizational and regulatory requirements.
• Implement data partitioning and indexing strategies to improve query performance within data lakes and Synapse.
• Automate data ingestion, transformation, and processing tasks to ensure efficient and scalable data workflows within the Azure environment.
• Create and maintain detailed documentation for data architectures, pipelines, processes, and data models, ensuring transparency and ease of maintenance.
• Provide technical guidance and mentorship to junior data engineers, sharing best practices and ensuring adherence to high-quality engineering standards.
• Monitor resource utilization in Azure environments, planning for future data growth and ensuring efficient use of cloud resources.
• Strong knowledge of Medallion architecture.
• Experience in setting up Parquet and Delta file structures.
• Experience working with unstructured data sources.
• Strong knowledge of consuming and exposing data in various formats, such as XML and JSON.
• Continuously stay informed on the latest features and best practices in Azure Synapse Analytics, Microsoft Fabric, and the Azure ecosystem, implementing improvements as needed.
• Strong knowledge of Python for creating and scheduling data pipelines.
• Implement real-time data ingestion and processing pipelines using technologies such as Azure Stream Analytics and Azure Event Hubs.
• Design and implement a data mesh architecture to support decentralized data ownership and self-service data infrastructure, ensuring scalable and flexible data management across the organization.
• Architect and manage multi-cloud data solutions, integrating data across different cloud platforms (e.g., AWS, OCI) with Azure Synapse for a unified data and analytics ecosystem.
• Design and manage hybrid data architectures that integrate on-premises data centers with Azure cloud environments, ensuring seamless data movement and synchronization between cloud and on-prem systems.
• Utilize advanced data cataloging tools such as Azure Purview to create an enterprise-wide data catalog, enabling efficient data discovery and usage across various teams.
• Create and automate end-to-end machine learning pipelines that integrate data ingestion, feature engineering, model training, and deployment using Azure ML, Python (scikit-learn, TensorFlow, PyTorch), and Azure Synapse Analytics.
• Utilize Python-based data augmentation or synthetic data generation techniques (e.g., GANs or SMOTE) to enrich datasets for machine learning training, especially where data is limited or imbalanced.
Preferred Experience/Qualification/Knowledge Skills
a. Education: Bachelor's degree in Information Systems, Computer Science, or a related scientific or technical field, and a minimum of five (5) years of relevant experience.
b. General Experience
• Work Experience: 5+ years of experience designing and implementing data solutions and creating data pipelines for enterprise-level applications.
• Industry Knowledge: Experience in the water and wastewater industry and an understanding of Oracle utility applications is preferred.
• Project Experience: Demonstrated experience working on large-scale data projects in diverse team environments, with a focus on analytics, business intelligence, and enterprise systems.
c. Specialized Experience
• Data Modeling: Extensive experience with data modeling and database design.
• Enterprise Analytics: Proven expertise in implementing enterprise-wide analytics and business intelligence solutions, including data integration from multiple systems into a single data repository.
d. Skillset
• Database & Data Structures: Deep understanding of database design principles, SQL, PL/SQL, and Oracle database management systems, including performance optimization and troubleshooting.
• Data Governance & Quality: Familiarity with data governance frameworks, ensuring data integrity, quality, and security within an enterprise context.
• Data Lakes: Strong experience in creating data lakes and data warehouses.
• Python: Strong knowledge of writing Python code to create and manage data pipelines.
• Communication & Collaboration: Excellent verbal and written communication skills, with the ability to work closely with stakeholders to translate business needs into technical solutions.
• Problem-Solving: Strong analytical skills and problem-solving abilities, especially when working with large, complex datasets.
If you are interested in this position, please send a copy of your latest resume to irfana.reshi@iquasar.com with the information requested below, and let me know the best time and number to call to discuss this opportunity. If you are not interested, or this is not the right fit for you, please feel free to share this opportunity with your friends, networks, or anyone you know who may be interested. Thank you!
• Availability to start a new job
• Best rates
• Contact #
Please don't hesitate to contact me with any questions you may have. All employment is decided on the basis of qualifications, merit, and business need.
Regards,
Irfana Reshi
Senior Recruitment Professional
iQuasar, LLC
Cleared Recruitment | Proposal Development | Technology
irfana.reshi@iQuasar.com
Direct: (703) 936-0644
Main: (703) 962-6001 Ext. 538
www.iQuasar.com
An Equal Opportunity Employer:
iQuasar LLC is proud to be an Equal Employment Opportunity Employer. We do not discriminate based on race, religion, color, national origin, political affiliation, sex, sexual orientation, gender identity, age, marital/parental/veteran status, disability, genetic information, membership in an employee organization, retaliation, military service, other non-merit factors, or any other applicable characteristics protected by law.