Data Engineer-Data Platforms

IBM Pune, IN

Company: IBM

Location: Pune, IN

Type: Full Time

Job Description

Introduction

In this role you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers) where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your role and responsibilities

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines and workflows and implementing solutions that address clients' needs.
Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise

* Must have 5+ years of experience in Big Data (Hadoop, Spark) with Scala and Python
* HBase and Hive required; AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git are good to have
* Experience developing Python and PySpark programs for data analysis, including building custom Python frameworks for rule generation (similar to a rules engine)
* Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations

Preferred technical and professional experience

* Understanding of DevOps
* Experience in building scalable end-to-end data ingestion and processing solutions
* Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

Date Posted: 12/04/2025
