Overview

Job title: Data Engineer
Location: São Paulo, Brazil
Work mode: Hybrid
Overall experience in IT: 3-5 Years
Relevant experience in Data Engineering: 2-4 Years
Role Value Proposition:
At LATAM Data Hub (LDH), our mission is to build the next-generation data lakehouse for MetLife and to help deploy it across various LATAM countries. We have developed a world-class, cloud-native platform that enables reporting, analytics, data supply pipelines, and real-time delivery of data to various digital and non-digital channels. The platform leverages cutting-edge open-source and proprietary technologies to create a highly configurable system that can be adapted to individual market needs quickly and at low cost. The platform runs in a fully containerized, elastic cloud environment and is designed to scale to serve millions of users.
We are looking for a Data Engineer with a track record of designing and implementing large, complex technology projects at a global scale. The ideal candidate has a solid foundation in hands-on ETL and analytical warehouse development, understands the complexities of managing end-to-end data pipelines, and has in-depth knowledge of data governance and data management concepts. Success in this role requires a balance of product-centric technical expertise and the ability to navigate complex deployments involving multiple systems and teams. The role involves interaction with technical staff and senior business and IT partners around the world. This position is also responsible for ensuring operational readiness by incorporating configuration management, exception handling, logging, and end-to-end operationalization of the batch and real-time data pipelines that ingest, manage, and process data into the hub.
Required:
• ETL and data warehousing development experience.
• Experience designing ETL and data lakes on cloud or big-data platforms
• Demonstrated experience implementing and deploying scalable, performant data hubs at global scale
• Demonstrated experience with cutting-edge database technologies and cloud services such as Azure, GCP, Databricks, or Snowflake; deep experience with technologies such as Spark (Scala/Python/Java), ADLS, Kafka, SQL, Synapse SQL, Cosmos DB, and graph databases
• Hands-on expertise in implementing analytical data stores on the Azure platform using ADLS, Azure Data Factory, Databricks, and Cosmos DB (Mongo/Graph API)
• Strong analytic skills related to working with unstructured datasets
• Strong SQL knowledge and data analysis skills for data anomaly detection and data quality assurance.
• Eagerness to learn new technologies on the fly and ship them to production
• Demonstrated experience in configuration management and DevOps automation
• Excellent communication skills: Demonstrated ability to explain complex technical content to both technical and non-technical audiences
• Experience working on complex, large-scale, multi-team enterprise programs using agile development methodologies
• Experience in solution implementation, performance testing, and tuning; management and performance tuning (partitioning/bucketing) of ADLS, Synapse SQL databases, or GCP BigQuery and GCS
• Bachelor’s degree in computer science or related field.
Preferred:
• Working knowledge of English
• Hybrid mode of work (the client may require on-site work one to two years from now, and availability to visit the client office at least once a month, if required)
Kindly share your updated CV on WhatsApp at (+91) 909-258-2252 or to sikkander.f@tcs.com for more information.