Azure Data Engineer

Hourly rate: members only

Availability: members only

Willingness to travel: Within the UK

Professional status: Employee

Last updated: 9 Sep 2022

Total work experience:

Language skills: English

Personal summary

• Overall 13 years of experience across a variety of industries, including 3 years as an Azure Data Engineer (Microsoft Azure Data Flows, Azure Data Factory, Azure Databricks, Synapse, SQL DB) and 10 years with the ETL tools Informatica, SSIS and Talend and with Big Data technologies
• Overall 2 years of onsite experience as an onsite coordinator in Belgium and the United Kingdom
• Hands-on experience in multiple domains such as Retail, Banking, Mortgage and ERP
• Migrated an on-premises data warehouse to Azure Synapse using Azure Data Factory and Azure Data Lake Gen2
• Loaded data from on-premises SAP ODP to Azure Synapse using the OData services API
• Loaded transactional data from on-premises SAP ODP using the Xtract IS component in an Azure SSIS package
• Created Azure Key Vaults and secret keys for security, integrated with Azure Active Directory
• Created nightly backups of Delta tables in Azure Databricks using the cloning technique (see the first sketch after this summary)
• Implemented an automated delta (incremental) load mechanism for Delta tables in Azure Databricks
• Created dynamic code to pull incremental data from on-premises sources to Azure Data Lake Gen2 (see the watermark sketch after this summary)
• Created CI/CD pipelines in Azure DevOps
• Loaded transactional data from on-premises systems to Azure Synapse
• Automated the process of Parquet file generation using Azure Databricks
• Good knowledge of Azure DevOps
• In-depth knowledge of Apache Hadoop architecture (1.x and 2.x) and Apache Spark 2.x architecture
• Hands-on experience with Hadoop ecosystem components such as Spark, HDFS, YARN, Hive, Sqoop, MapReduce, Scala, Pig, Oozie and Kafka
• Experience importing and exporting data between RDBMS and Hadoop/Hive using Sqoop
• Experience transporting and processing real-time streaming data using Kafka
• Experience implementing OLAP multi-dimensional cube functionality using Azure Synapse
• Experience writing Spark transformations and actions using Spark SQL in Scala (see the last sketch after this summary)
• Experience writing HQL queries against the Hive data warehouse
• Modified and performance-tuned Hive scripts, resolved automation job failures and reloaded data into the Hive data warehouse when needed
• Experienced in creating detailed BRDs (business requirement documents) and design documents to conduct end-to-end analysis of new user stories and applications during all release cycles in agile sprints
• Proactively worked with delivery teams for smooth deployment of applications, providing end-user acceptance support to cross-platform teams of developers and testers
• Skilled in data modelling, SQL scripting and predictive data modelling
• Handled multiple data warehouse projects covering design, development, unit testing, integration testing and support
• Analyzed data through data profiling using the Informatica Analyst tool
• Performed address matching, standardization and clustering using the Informatica Developer tool
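To illustrate the nightly Delta table backup via cloning mentioned above, here is a minimal Scala sketch of a DEEP CLONE job on Azure Databricks; the table names (sales.orders, backup.orders_nightly) are illustrative placeholders, not the actual project objects.

// Minimal sketch: nightly backup of a Delta table via DEEP CLONE on Azure Databricks.
// Table names are hypothetical placeholders.
import org.apache.spark.sql.SparkSession

object NightlyDeltaBackup {
  def main(args: Array[String]): Unit = {
    // On Databricks a SparkSession already exists; getOrCreate reuses it.
    val spark = SparkSession.builder().getOrCreate()

    // DEEP CLONE copies both metadata and data files, producing an independent backup copy.
    spark.sql(
      """CREATE OR REPLACE TABLE backup.orders_nightly
        |DEEP CLONE sales.orders""".stripMargin)
  }
}

Scheduling such a notebook or job nightly (for example via a Databricks job trigger) gives a point-in-time copy that can be restored independently of the source table.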
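The watermark-driven incremental pull from on-premises sources to Azure Data Lake Gen2 could look roughly like the sketch below. The JDBC connection details, the watermark column (modified_date), the control value and the abfss path are assumptions for illustration only, and storage authentication to ADLS Gen2 is assumed to be configured on the cluster.

// Hedged sketch: incremental load from an on-premises SQL Server table to ADLS Gen2 as Parquet.
import org.apache.spark.sql.SparkSession

object IncrementalToAdls {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("incremental-load").getOrCreate()

    // Last successfully loaded watermark; in practice this would come from a control table.
    val lastWatermark = "2022-09-01 00:00:00"

    // Push the incremental filter down to the source via a JDBC subquery.
    val incremental = spark.read
      .format("jdbc")
      .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=SalesDB")
      .option("dbtable", s"(SELECT * FROM dbo.Orders WHERE modified_date > '$lastWatermark') AS src")
      .option("user", "<user>")
      .option("password", "<password>")
      .load()

    // Append only the new slice to the raw zone in ADLS Gen2.
    incremental.write
      .mode("append")
      .parquet("abfss://raw@mydatalake.dfs.core.windows.net/orders/")
  }
}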
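As a small example of Spark transformations and actions with Spark SQL in Scala, the sketch below builds an invented orders dataset, registers it as a temporary view, aggregates it with Spark SQL (a lazy transformation) and triggers execution with show() (an action). The data and column names are made up for illustration.

// Minimal sketch: Spark SQL transformation plus action in Scala.
import org.apache.spark.sql.SparkSession

object SparkSqlExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("spark-sql-demo")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Transformation: build a tiny DataFrame and expose it to Spark SQL.
    val orders = Seq(("UK", 120.0), ("UK", 80.0), ("BE", 200.0)).toDF("country", "amount")
    orders.createOrReplaceTempView("orders")

    // Transformation: aggregate revenue per country (still lazy at this point).
    val revenue = spark.sql("SELECT country, SUM(amount) AS revenue FROM orders GROUP BY country")

    // Action: show() forces evaluation of the plan above.
    revenue.show()

    spark.stop()
  }
}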

Language skills

English

Fluent knowledge