Primary Skills:
Databricks with PySpark, Python
# 7+ years of experience with detailed knowledge of data warehouse technical architectures, ETL/ELT, and reporting/analytics tools.
# 4+ years of work experience in Databricks, PySpark, and Python project development.
# Expertise in Big Data ecosystems such as HDFS and Spark.
# Strong hands-on experience in Python or Scala.
# Hands-on experience with Azure Data Factory.
# Able to convert SQL stored procedures to Python code in the PySpark framework using DataFrames.
# Design and develop SQL Server stored procedures, functions, views, and triggers used during the ETL process.
# SQL Server development experience with relational databases is a must.
# Development of stored procedures for transformations in the ETL pipeline.
# Should have work experience in Agile projects.
# Write and maintain documentation of the ETL processes via process flow diagrams.
# Collaborate with business users, support team members, and other developers across the organization to help everyone understand issues that affect the data warehouse.
# Good customer-interaction experience is required.
# Possesses good interpersonal and communication skills.
Time and Venue
27 July, 9.30 AM – 5.30 PM
Venue: will share shortly
Contact – Priya Mishra
If interested, pls share the below details along with your updated resume on PriyaM4@hexaware.com.
Total Experience-
Current CTC-
Expected CTC-
Notice Period-
Current Location-
Available for F2F on 27th July in Bangalore-