JLL Technologies

Senior Data Engineer

JLL Technologies, Chicago, Illinois, United States, 60290


As a Data Engineer, you will be responsible for:

- Develop systems that ingest, cleanse, and normalize diverse datasets; build data pipelines from various internal and external sources; and add structure to previously unstructured data.
- Interact with internal colleagues and external professionals to determine requirements, anticipate future needs, and identify areas of opportunity to drive data development.
- Develop a strong understanding of how data flows and is stored across the organization's applications, such as CRM, Broker & Sales tools, Finance, HR, etc.
- Unify, enrich, and analyze a variety of data to derive insights and opportunities.
- Design and develop data management and data persistence solutions for application use cases leveraging relational and non-relational databases, enhancing our data processing capabilities.
- Develop POCs to help platform architects, product managers, and software engineers validate solution proposals and plan migrations.
- Develop a data lake solution to store structured and unstructured data from internal and external sources, and provide technical guidance to help colleagues migrate to a modern technology platform.
- Contribute and adhere to CI/CD processes and development best practices, strengthening the discipline within the Data Engineering organization.

Sounds like you? To apply, you need to be:

Experience and Education:

- 7+ years' overall work experience and a bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or another quantitative discipline in science, business, or social science.
- Hands-on engineer who is curious about technology, adapts quickly to change, and understands supporting technologies such as cloud computing (AWS, Azure (preferred), etc.), microservices, streaming technologies, networking, and security.
- 3+ years of active development experience as a data developer using PySpark, Spark Streaming, Azure SQL Server, Cosmos DB/MongoDB, Azure Event Hubs, Azure Data Lake Storage, Azure Search, etc.
- Build, test, and enhance data curation pipelines that integrate data from a wide variety of sources (DBMS, file systems, APIs, and streaming systems) to support KPI and metrics development with high data quality and integrity.
- Maintain the health and monitoring of assigned data engineering capabilities that span analytic functions: triage maintenance issues, ensure high availability of the platform, monitor workload demands, work with Infrastructure Engineering teams to maintain the data platform, and serve as an SME for one or more applications.
- Team player: a reliable, self-motivated, and self-disciplined individual capable of executing multiple projects simultaneously in a fast-paced environment while working with cross-functional teams.
- 5+ years of experience working with source code control systems and Continuous Integration/Continuous Deployment tools.
- Independent and able to manage, prioritize, and lead workload.

Technical Skills & Competencies:

- Data structures and algorithms.
- Experience writing high-performance, production-ready code.
- Azure infrastructure: Databricks and Azure Functions.
- Python libraries such as NumPy, SciPy, and Pandas.
