IBM Company is a data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics, and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment where everyone's input and efforts are valued. We hire outstanding individuals and allow them to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
As an entry-level Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing.
Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.
In this role your responsibilities may include:
* Implement predictive/statistical models with big data and ML techniques.
* Design enterprise search applications (Elasticsearch, Splunk).
* Collaborate in Agile teams with scientists, engineers, and consultants.
* Cleanse/integrate data efficiently; develop and evaluate predictive models.
* Design and develop Snowflake Data Cloud solutions: ingestion pipelines, architecture, governance, and security.
Data Pipeline & Engineering
* Build scalable pipelines using SQL, Python, dbt, and cloud-native tools.
* Apply ELT/ETL patterns for batch, incremental, and CDC processes.
* Create data models (Dimensional, Data Vault, Lakehouse).
* Develop transformations, data quality checks, and business logic.
Platform & Cloud
* Work with Snowflake, Databricks, AWS, Azure, or GCP for high-performance solutions.
* Implement warehouse/lakehouse structures, schema management, and governance.
* Use orchestration tools (Airflow, dbt Cloud, etc.).
Client Delivery & Consulting
* Translate requirements into technical tasks; communicate progress and blockers.
* Participate in demos, stand-ups, and architecture discussions; document solutions.
Data Quality Governance & Security
* Ensure data accuracy through validation and monitoring.
* Apply security best practices: RBAC, masking, encryption, compliance.
* Contribute to standards and templates.
Collaboration & Growth
* Work with senior engineers on design; participate in code reviews.
* Mentor junior engineers; share knowledge; build reusable assets.
Continuous Learning & Innovation
* Stay current with cloud platforms, Snowflake, Databricks, and the modern data stack.
* Explore new tools and methodologies; contribute innovative ideas.
* Senior-level expertise in SQL and the development of ELT/ETL pipelines.
* Senior experience with at least one major cloud platform (AWS, Azure, or GCP).
* Advanced knowledge of modern data platforms such as Snowflake, Databricks, BigQuery, Redshift, or Synapse.
* Proficiency in Python or similar programming languages for complex data engineering tasks.
* Strong understanding of data modeling techniques (Dimensional, Data Vault, Lakehouse), combined with experience in CI/CD workflows and version control tools (GitHub, Bitbucket).