PROFESSIONAL EXPERIENCE:
Company: Microsoft
Role: Support Escalation Engineer
Project: CSS BigData Azure Rapid Response Team
Aug 2019 – Present
Description:
Azure Rapid Response: Azure Rapid Response (ARR) is business-critical support for Microsoft Azure's top 100 customers, covering Big Data technologies such as Azure Data Factory, Azure Data Lake Store, Azure Data Lake Analytics, and Azure Stream Analytics. Customers with ARR subscriptions create service requests (CritSit, Sev A/B/C), and as ARR engineers we help them identify issues and provide solutions in a timely manner.
- Troubleshoot issues by checking audit logs, reproduce them in internal Azure subscriptions, and provide customers with next steps for mitigation.
- Work with end customers to identify and resolve performance issues, joining calls whenever required to understand root causes.
- Build solutions for Microsoft customers on technologies such as Azure HDInsight, Azure Databricks, Azure Synapse, Microsoft Fabric, Azure Machine Learning, Azure Data Factory, Azure Data Lake, Azure Stream Analytics, and Azure Cosmos DB, joining calls on short notice to thoroughly understand customer requirements and assist with technical design and architecture.
- Migrate SQL databases to Azure Data Lake, Azure Data Lake Analytics, Azure SQL Database, Databricks, and Azure SQL Data Warehouse; control and grant database access; migrate on-premises databases to Azure Data Lake Store using Azure Data Factory.
- Work on cases opened by Microsoft end customers for issues with products such as Logic Apps connectors for Azure Data Factory and Azure SQL Server.
- Work with support teams, account teams, Azure Customer Engineers, product engineering teams, and other stakeholders to ensure a satisfactory customer experience.
- Handle complex escalations of customer issues from support and field teams; conduct impact analysis to prioritize escalations and help customers resolve their issues immediately.
- Expertise in cloud-based managed services for data warehousing and analytics in Microsoft Azure (Azure Data Lake Analytics, Azure Data Lake Storage, Azure Data Factory, Azure SQL Server, Stream Analytics, HDInsight, Databricks).
- Troubleshoot and debug issues related to Azure Data Factory and Azure Data Lake, reproduce scenarios in the Azure Portal, and modify JSON definitions to reproduce issues; attend daily triage with product teams and SMEs for various products.
- Strong exposure to network troubleshooting for Azure Data and AI products.
- Write SQL queries to help customers resolve Azure SQL Server issues, identifying bottlenecks when migrating data from on-premises to Azure SQL Server.
- Optimize query performance in Azure Cosmos DB for Microsoft customers; design and implement change feed processing in Azure Cosmos DB.
- Develop applications that use the Azure Cosmos DB for NoSQL API for Microsoft customers and write efficient SQL queries against that API.
- Create pipelines in ADF using linked services, datasets, and pipelines to extract, transform, and load data between sources such as Azure SQL, Blob Storage, and Azure SQL Data Warehouse, including write-back scenarios.
- Troubleshoot customer issues in Azure Support Center by checking CPU usage for copy activity runs via pipeline run IDs; escalate issues by creating ICMs with the engineering team and follow up so customers receive timely updates.
- Develop multiple pipelines in Azure Data Factory from various sources for ETL activities such as copy activities and custom activities.
- Provide resolutions to customers in a timely manner while following the FDR and FCR processes.
- Collaborate with other teams (Azure Networking, Azure Databricks) on customer cases and follow up with them for updates to deliver solutions to customers.
- Prepare complete documentation for closed cases, review it when similar cases arrive, and assist other support engineers with it.
- Triage with other teams in resolving integration issues.
- Create new resource groups and virtual machines in Azure.
- Mentor the team.
Environment: Visual Studio 2017, C# 6.0, ASP.NET 4.0, VB.NET, MVC 5.0/4.0, Logic Apps, Azure Data Lake Store and Azure Data Factory (V1 and V2), Azure Stream Analytics, Azure Purview, Apache Hadoop, Apache Spark, CSS 3.0, Subversion (SVN), JSON, HTML5, Azure, jQuery 2.0, IIS 7.0, Windows XP, Continuous Integration, Case Buddy, Service Desk, OL Helper, and MS Solve
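The ADF work described above involves editing pipeline JSON definitions (linked services, datasets, copy activities) to reproduce customer issues. Below is a minimal, illustrative sketch of such a definition assembled in Python; all resource names (BlobSource1, SqlSink1, etc.) are hypothetical placeholders, not from any actual customer case.

```python
import json

# Minimal sketch of an ADF (V2) copy-activity pipeline definition,
# of the kind edited as JSON when reproducing customer issues.
# All resource names here are hypothetical.
pipeline = {
    "name": "CopyBlobToSqlPipeline",
    "properties": {
        "activities": [
            {
                "name": "CopyFromBlobToSql",
                "type": "Copy",
                "inputs": [{"referenceName": "BlobSource1", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "SqlSink1", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {"type": "BlobSource"},
                    "sink": {"type": "SqlSink", "writeBatchSize": 10000},
                },
            }
        ]
    },
}

# Serialize the definition the way it appears in the portal's JSON view.
definition = json.dumps(pipeline, indent=2)
print(definition)
```

Tweaking fields such as `writeBatchSize` in the sink is one typical way a repro isolates a copy-activity performance issue.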
Company: Mindtree Pvt Ltd, WA
Project: Microsoft CSS Big Data Support
Role: Azure Big Data Support Engineer
July 2016 – Aug 2019
Client: Microsoft Corporation
Responsibilities:
- In-depth understanding of creating and maintaining HDInsight clusters in Azure cloud for data processing.
- Expertise in ACL permissions in Azure Data Lake Store.
- Assist with service requests created by Microsoft end customers for issues related to Azure Big Data products (Azure HDInsight, Azure Databricks, Azure Data Lake Store).
- Provide resolutions in a timely manner through emails and screen sharing sessions.
- Verify backend logs (Kusto) to troubleshoot and resolve customer issues.
- Perform reproductions when required to assist customers with detailed steps.
- Collaborate with product team through Incident management system to come up with logical conclusions on customer reported issues.
- Handle critical situations reported by both professional and premium customers.
- Train new hires in process and technology.
- Participate actively in internal knowledge-sharing sessions.
- Attend daily triage calls with the product team.
- Document troubleshooting guides, user guides, and release notes.
Environment: Azure HDInsight, Azure Databricks, Azure Data Lake Store, PuTTY, MobaXterm, SSMS, Visual Studio, Jarvis, Kusto.
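The ACL work in Azure Data Lake Store (Gen1) above uses POSIX-style rwx permission triads. The sketch below illustrates how those triads map to octal values; the helper name and the sample ACL entry are illustrative, not an Azure SDK API.

```python
# Sketch of how POSIX-style ACL permission strings in Azure Data Lake
# Store (Gen1) map to octal digits (e.g. "rwx" -> 7, "r-x" -> 5).
# perm_to_octal and the sample entry below are illustrative only.
def perm_to_octal(perm: str) -> int:
    """Convert an rwx-style permission triad to its octal value."""
    if len(perm) != 3:
        raise ValueError(f"expected a 3-character triad, got {perm!r}")
    bits = 0
    for char, value in zip(perm, (4, 2, 1)):  # r=4, w=2, x=1
        if char != "-":
            bits += value
    return bits

# An ACL entry like "user:alice:r-x" grants read and execute (octal 5).
acl_entry = "user:alice:r-x"
scope, principal, triad = acl_entry.split(":")
print(principal, perm_to_octal(triad))  # alice 5
```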
Client: TD Bank
Nov 2012 – Dec 2014
Location: Bangalore
Role: Databricks Engineer
Responsibilities:
- Analyze the Sales & DMS data sets to be moved to Azure cloud storage and document the data flow strategy for each data set.
- Develop and maintain data flows using Microsoft Azure SQL Server utilities and BI tools such as SSIS and the bulk copy utility to extract data and generate flat files.
- Develop Azure Logic Apps and Function Apps to move/copy files between Azure Data Lake Store and Azure Blob Storage.
- Develop Azure PowerShell scripts to export files from on-premises storage to cloud storage using the Azure PowerShell SDK.
- Work closely with the offshore team through daily communication as a subject-matter expert, helping them meet business targets in coding and unit testing.
- Execute integration testing and user acceptance testing in coordination with the client based on planned releases, and deploy deliverables to the production environment.
- Follow Scrum agile methodology; manage code development and the repository using Microsoft VSTS.
Environment: Windows and Mac OS X, Visual Studio tools for Azure Data Lake Analytics & Azure Data Factory, Azure Logic Apps & Function Apps, Azure PowerShell ISE, SQL Server bulk copy utility, SQL Server Integration Services & SQL Agent for job scheduling, Git, JSON, C#, T-SQL, Azure PowerShell & U-SQL.
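The bulk copy utility (bcp) flat-file exports mentioned above follow a standard command shape. The sketch below assembles one such command; the server, database, and table names are hypothetical placeholders.

```python
# Sketch of assembling a bcp command to export a table to a pipe-
# delimited flat file, as in the SQL Server -> flat file step above.
# Server/database/table names are hypothetical placeholders.
def build_bcp_export(table: str, out_file: str, server: str, database: str) -> list:
    return [
        "bcp", table, "out", out_file,
        "-S", server,    # target SQL Server instance
        "-d", database,  # source database
        "-T",            # trusted (Windows) authentication
        "-c",            # character data format for the flat file
        "-t", "|",       # pipe field terminator
    ]

cmd = build_bcp_export("dbo.Sales", "sales.dat", "sqlprod01", "SalesDB")
print(" ".join(cmd))
```

In practice such a command list would be handed to a scheduler or subprocess call rather than printed; the point here is only the flag layout.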

