IBM | Data Engineer | Data Integration & Pipeline Optimization


Join Our Team as a Big Data Engineer and Drive Scalable Data Solutions

We are looking for a passionate and skilled Big Data Engineer to join our dynamic team and contribute to building cutting-edge data pipelines and integration solutions. In this role, you will design, develop, and maintain robust ETL workflows using Ab Initio, transforming complex data requirements into high-performing data systems that support strategic business needs.

Job Description:
Company: IBM
Job Role: Data Engineer
Batches: 2022-2025
Degree: Bachelor’s degree
Experience: Freshers
Location: Pune, India
CTC/Salary: INR 4.5-6.5 LPA (expected)

Key Responsibilities:

  • Design and Develop ETL Pipelines: Create scalable and efficient ETL processes using Ab Initio to extract, transform, and load data from diverse sources to various targets.
  • Data Modeling & Analysis: Analyze business data needs and translate them into optimized data models that support performance and scalability.
  • Component Utilization: Build ETL workflows using key Ab Initio components such as Transform Functions, Join, Rollup, Normalize, and others (a conceptual sketch of this pattern appears after this list).
  • Data Quality Assurance: Implement robust data validation and quality checks to ensure clean and accurate data across pipelines.
  • Performance Optimization: Tune Ab Initio graphs and ETL processes for maximum performance and resource efficiency.
  • Collaboration: Work closely with data architects, business analysts, QA teams, and DBAs to ensure seamless delivery and integration of data solutions.
  • Security & Access: Enable secure and efficient data access for analysts and data scientists across the organization.
  • Documentation: Maintain clear and concise technical documentation for all ETL designs and data workflows.
  • Support & Maintenance: Monitor, evaluate, and improve data workflows to meet evolving business and data infrastructure requirements.
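Ab Initio graphs are assembled visually in the GDE and expressed in DML rather than written as application code, so the sketch below is only a conceptual Python analogue of the extract, validate, join, and rollup pattern the responsibilities above describe. All field names, validation rules, and sample data are illustrative assumptions, not details of the role.

```python
# Conceptual Python analogue of an Ab Initio-style ETL flow:
# extract -> validate (data quality check) -> join -> rollup -> load.
# Field names, rules, and data are illustrative assumptions only.
from collections import defaultdict

def extract(records):
    """Extract step: a real pipeline would read from files, queues,
    or databases; here it simply yields in-memory dicts."""
    yield from records

def validate(record):
    """Data quality check: drop records with a missing key or a
    non-positive amount (roughly, what a reject port would catch)."""
    return record.get("customer_id") is not None and record.get("amount", 0) > 0

def join(orders, customers):
    """Inner join on customer_id, analogous to the Join component."""
    lookup = {c["customer_id"]: c for c in customers}
    for order in orders:
        customer = lookup.get(order["customer_id"])
        if customer is not None:
            yield {**order, "region": customer["region"]}

def rollup(rows, key, value):
    """Aggregate totals per key, analogous to the Rollup component."""
    totals = defaultdict(float)
    for row in rows:
        totals[row[key]] += row[value]
    return dict(totals)

if __name__ == "__main__":
    orders = [
        {"customer_id": 1, "amount": 120.0},
        {"customer_id": 2, "amount": 75.5},
        {"customer_id": None, "amount": 10.0},  # fails validation
        {"customer_id": 1, "amount": 30.0},
    ]
    customers = [
        {"customer_id": 1, "region": "EMEA"},
        {"customer_id": 2, "region": "APAC"},
    ]

    clean = (o for o in extract(orders) if validate(o))
    print(rollup(join(clean, customers), key="region", value="amount"))
    # -> {'EMEA': 150.0, 'APAC': 75.5}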

Required Qualifications:

  • Education: Bachelor’s degree in Computer Science, Information Technology, or a related field (Master’s preferred).
  • Experience: Proven experience in developing and maintaining ETL workflows using Ab Initio.
  • Skills:
    • Proficiency in Ab Initio components and best practices for reusable ETL designs.
    • Strong understanding of data modeling, data transformation, and data pipeline optimization.
    • Familiarity with data governance, security protocols, and performance tuning.
    • Experience working in cross-functional, collaborative teams.

Preferred Experience:

  • Experience in performance tuning, troubleshooting, and supporting large-scale data systems.
  • Ability to participate in technical design reviews and contribute expert recommendations.
  • Exposure to working in agile environments and fast-paced data-driven organizations.

Apply Through This Link: Click Here
