In this role you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers) where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe.
You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.
Curiosity and a constant quest for knowledge serve as the foundation to success in IBM Consulting. In your role, you'll be encouraged to challenge the norm, investigate ideas outside of your role, and come up with creative solutions, resulting in groundbreaking impact for a wide network of clients. Our culture of evolution and empathy centers on long-term career growth and development opportunities in an environment that embraces your unique skills and experience.
As an Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing.
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex unstructured data sets.
In this role, your responsibilities may include:
Data Pipeline & Engineering Execution
- Design and maintain scalable data pipelines and ingestion processes using SQL, Python, dbt, and cloud-native tools.
- Implement ELT/ETL patterns for batch, incremental, and CDC pipelines.
- Build data models following best practices (Dimensional, Data Vault, Lakehouse).
- Develop transformations, data quality checks, and business logic.
Platform & Cloud Engineering
- Build efficient data solutions on platforms like Snowflake, Databricks, AWS, Azure, or GCP.
- Implement warehouse/lakehouse structures, storage standards, schema management, and governance.
- Support orchestration with Airflow, dbt Cloud, or similar tools.
Client Delivery & Consulting Support
- Collaborate with stakeholders to translate requirements into technical tasks.
- Communicate progress, decisions, and blockers clearly.
- Participate in demos, stand-ups, and architecture discussions.
- Document technical solutions and handover materials.
Data Quality, Governance & Security
- Implement validation and monitoring to ensure data accuracy and reliability.
- Apply security best practices: RBAC, masking, encryption, and compliance.
- Contribute to documentation standards and templates.
Collaboration & Team Growth
- Work with senior engineers and architects on solution design.
- Participate in code reviews and adopt engineering standards.
- Mentor junior engineers and share knowledge internally.
- Contribute to accelerators, playbooks, and reusable assets.
Continuous Learning & Innovation
- Stay updated on cloud, data engineering, and modern data stack technologies.
- Explore new tools and methodologies to improve efficiency.
- Bring innovative ideas and participate in internal initiatives.
Mid-level Data Engineer with a strong background in data engineering or similar roles.
Advanced SQL expertise and hands-on experience building efficient data pipelines.
Proficient in Python (or similar programming languages) for development and automation.
Skilled in cloud data platforms such as Snowflake, Databricks, BigQuery, Redshift, or Synapse.
Familiar with data modeling concepts (Dimensional, Data Vault, Lakehouse) and experienced in version control (GitHub, Bitbucket) and CI/CD workflows.