Data Engineer

About IndiaMART:

IndiaMART is India’s largest online B2B marketplace, where manufacturers, suppliers, and exporters list their products directly on the platform and are contacted by its visitors. Founded in 1996 by Dinesh Agarwal, an alumnus of HBTI Kanpur, IndiaMART serves Small & Medium Enterprises (SMEs), large enterprises, and individuals alike. Our mission is ‘to make doing business easy’.

Headquartered in Noida and with an employee strength of more than 2,826 across 31 branches, we not only serve SMEs but have also become a key growth partner for large corporates. With a rating of 4.7, our app is not only one of the highest-rated apps on the Play Store but has also been recognized as the Best Business App by the Global Mobile App Summit Awards (GMASA) in 2017.

Qualification:

B.Tech/B.E.

Experience: 

2+ years

Location: 

Noida, Uttar Pradesh (remote until the office reopens)

Roles and responsibilities:

The data warehouse at IndiaMART powers the data requirements of product teams across the marketplace. Over the years, we have achieved strong scale and reliability with our warehouse, powering complex and dynamic reports and housing data with the slice-and-dice capabilities required by stakeholders. We are looking for enthusiastic and fast-paced individuals to join our team of data geeks.

Our data is hosted on both Azure and AWS warehouses. We are looking for someone who is strong in either warehouse technology (Azure or AWS) and open to working with and learning the other.

As a Data Engineer at our warehouse, your primary responsibilities will be:

· Apply your strong technical experience in building reports and governance models for BI analytics that support a broad range of analytical requirements across the company
· Work closely with product teams on the marketplace to continually evolve solutions as business processes and requirements change
· Extract, transform, and load data from a wide variety of data sources using SQL and AWS big data technologies
· Create and support real-time data pipelines built on AWS technologies, including Glue, Redshift/Spectrum, Kinesis, EMR, and Athena, or on Azure Data Factory
· Move data from any source to Azure Blob Storage, Data Lake, Azure SQL DB, or Azure SQL DW
· Design, develop, and deliver data integration interfaces in ADF and Azure Databricks
· Deliver data models on the Azure platform, whether on Azure Synapse or SQL
· Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers

Qualifications:

· 2+ years of industry experience in software development, data engineering, business intelligence, data science, or related field with a track record of manipulating, processing, and extracting value from large datasets
· Demonstrated strength in data modeling, ETL development, and data warehousing
· Experience using big data technologies (Hadoop, Hive, Hbase, Spark, etc.)
· Working knowledge of Spark, Scala, and Azure services: ADF, Synapse Analytics, Azure Databricks, ADLS Gen1 and Gen2, Blob Storage, SQL Database, and PL/SQL
· Knowledge of data management fundamentals and data storage principles
· Experience with business intelligence reporting tools (Power BI, Tableau, etc.) preferred