Analyst, Data Engineer – Hadoop, Hive, PySpark (Standard Chartered Bank)


Position Overview

We are seeking a motivated Big Data Developer with strong expertise in the Hadoop ecosystem, Hive, PySpark, and SQL to join our team. The ideal candidate should possess excellent programming, problem-solving, and analytical skills, along with the ability to adapt to new technologies and frameworks. The role involves working closely with development teams and business stakeholders to design, build, and optimize scalable data solutions.

Job Description

  • Company: Standard Chartered
  • Job Role: Analyst, Data Engineer
  • Batches: 2021-2025
  • Degree: Diploma / Bachelor’s Degree
  • Experience: Freshers / Experienced
  • Location: Bangalore, India
  • CTC/Salary: INR 5-15 LPA (expected)

Key Responsibilities

Processes

  • Write, debug, and optimize big data applications using Hadoop, HDFS, Hive, and Spark (an illustrative sketch follows this list).
  • Work with raw and unstructured data, transforming it into usable formats.
  • Design and build ELT/ETL pipelines in Hadoop and ensure smooth data integration.
  • Ensure compliance with coding standards, version control, and branching strategies.
  • Collaborate with cross-functional teams to deliver high-quality solutions in line with Agile and Waterfall methodologies.
  • Support job orchestration tools such as Control-M and enhance automation.
  • Contribute to reporting and analytics by integrating with tools like Tableau, Dataiku, or MicroStrategy (preferred).
  • Participate in software development lifecycle activities including requirement analysis, development, testing, and deployment.
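
To give a concrete sense of the work described above, here is a minimal PySpark sketch of the kind of Hive-backed ETL step this role involves. It is an illustration only: all table and column names (fcso_raw.transactions, fcso_curated.txn_daily, txn_ts, amount) are hypothetical assumptions, not details from the posting.

```python
# Illustrative sketch only: table and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("txn-daily-aggregation")
    .enableHiveSupport()  # lets Spark read and write Hive tables
    .getOrCreate()
)

# Read raw data from a Hive table and drop obviously bad records.
raw = spark.table("fcso_raw.transactions")
clean = (
    raw.filter(F.col("amount").isNotNull())
       .withColumn("txn_date", F.to_date("txn_ts"))
)

# Aggregate to a daily summary and write back to Hive, partitioned by date.
daily = clean.groupBy("txn_date", "account_id").agg(
    F.sum("amount").alias("total_amount"),
    F.count("*").alias("txn_count"),
)
daily.write.mode("overwrite").partitionBy("txn_date").saveAsTable(
    "fcso_curated.txn_daily"
)
```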

Regulatory & Business Conduct

  • Demonstrate exemplary conduct and live by the Group’s Values and Code of Conduct.
  • Ensure strict adherence to regulatory and compliance requirements in all deliverables.
  • Take ownership of identifying, mitigating, and resolving risks and compliance matters.
  • Contribute to the Bank’s Conduct Principles: Fair outcomes for clients, effective financial markets, compliance with financial crime regulations, and fostering the right environment.

Key Stakeholders

  • FCSO Development Teams
  • FCSO Business Units

Skills & Experience

  • Strong expertise in:
    • Hadoop ecosystem (HDFS, Hive, Spark)
    • SQL and advanced data querying (see the example after this section)
    • PySpark for big data processing
  • Familiarity with:
    • Azure DevOps for CI/CD
    • Control-M or other job orchestration tools
    • Enterprise Data Warehouse and Reference Data Management
    • Reporting tools (Tableau, Dataiku, MicroStrategy – preferred)
  • Strong understanding of Agile and Waterfall SDLC methodologies.
  • Excellent communication, problem-solving, and debugging skills.
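
As a rough illustration of the "advanced data querying" listed above, the sketch below runs a window-function query through Spark SQL against Hive. The table and column names are hypothetical, carried over from the ETL sketch earlier.

```python
# Hypothetical example of a window-function query; table/columns are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# Rank each account's days by total spend; keep the top three per account.
top_spend_days = spark.sql("""
    SELECT account_id,
           txn_date,
           total_amount,
           RANK() OVER (PARTITION BY account_id
                        ORDER BY total_amount DESC) AS spend_rank
    FROM fcso_curated.txn_daily
""").filter("spend_rank <= 3")

top_spend_days.show()
```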

Qualifications

  • Diploma / Bachelor’s Degree in Computer Science, Engineering, or a related field.

Apply Through This Link: Click Here
