Red Hat | Data Engineer | Apply Now


To be successful in this role, you should have an established set of foundational skills and the ability to learn new skills quickly as we modernize platforms and tooling. You must also be able to work with minimal supervision in a fast-paced and ambiguous environment. You will be accountable for translating and manipulating large datasets, and creating and maintaining software and tools that deliver data and insights to the right people at the right time.

Our ideal candidate has an interest in AI/ML solutions, has experience collaborating across multi-disciplinary teams, and has demonstrated experience partnering with business leaders to deliver impactful assets and solutions.

Job Description:
Company: Red Hat
Job Role: Data Engineer
Batches: 2021-2025
Degree: Bachelor’s/Master’s degree
Experience: Experienced
Location: Pune, India
CTC/Salary: INR 4.5-12 LPA (Expected)

What will you do:

  • Work closely with team members and stakeholders to turn business problems into analytical projects, translating requirements into solutions
  • Work cross-functionally with teams on data migration, translation, and organizational initiatives
  • Translate large volumes of raw, unstructured data into highly visual and easily digestible formats
  • Develop and manage data pipelines for predictive analytics modeling, model lifecycle management, and deployment (Extract-Load-Transform, ELT)
  • Recommend ways to improve data reliability, efficiency, and quality
  • Help create, maintain, and implement tools, libraries, and systems to increase the efficiency and scalability of the team
  • Develop and maintain proper controls and governance for data access
  • Communicate data-related challenges and help to prioritize resolutions based on alignment with organizational goals
  • Consult and assist consumers of XE managed data
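The pipeline responsibilities above follow an Extract-Load-Transform pattern: land raw data first, then clean and aggregate it in the warehouse layer. A minimal sketch of that flow using pandas and an in-memory SQLite database is below; the table, columns, and records are all hypothetical stand-ins for a real source.

```python
# Minimal ELT sketch: load raw records untouched, then transform downstream.
# All table and column names here are hypothetical.
import sqlite3

import pandas as pd

# Extract: raw records (stand-in for a REST API or object-store pull)
raw = pd.DataFrame([
    {"user": "a", "amount": "10.5", "ts": "2024-01-01"},
    {"user": "b", "amount": "3.2",  "ts": "2024-01-02"},
    {"user": "a", "amount": None,   "ts": "2024-01-03"},
])

conn = sqlite3.connect(":memory:")

# Load: land the raw data as-is (the "L" comes before the "T" in ELT)
raw.to_sql("raw_payments", conn, index=False)

# Transform: fix types and drop bad rows inside the warehouse layer, then aggregate
clean = pd.read_sql(
    "SELECT user, CAST(amount AS REAL) AS amount "
    "FROM raw_payments WHERE amount IS NOT NULL",
    conn,
)
totals = clean.groupby("user", as_index=False)["amount"].sum()
print(totals)
```

The point of loading before transforming is that the raw landing table stays replayable: transformation logic can be revised and re-run without re-extracting from the source.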

What will you bring:

  • Ability to critically analyze data, test hypotheses, and validate data quality
  • Ability to problem solve and to test and implement new technologies and tools
  • Solid grasp of data systems and how they interact with each other
  • Exceptional analytical skills to detect the source and resolution of highly complex problems
  • Proficient Python programming skills are required; experience with Python-based analysis frameworks such as pandas, PySpark, and PyArrow is a plus
  • Experience with Starburst, Snowflake, and other cloud data warehouse / data lake platforms is preferred
  • Excellent data manipulation skills are required, namely using SQL and the Python scientific stack (pandas, NumPy, scikit-learn)
  • Experience extracting unstructured data from REST APIs, NoSQL databases, and object storage (Ceph/S3)
  • Experience with Linux system administration, shell scripting, and virtualization technology (containers) is required
  • Mastery of Git (version control) and experience with versioning, merge request, and review processes and techniques is required
  • Experience with distributed computing frameworks (e.g., Dask, PySpark) preferred
  • OpenShift application development and administration is a plus
  • Experience deploying applications using PaaS technologies (e.g., OpenShift, Airflow) is a plus
  • Well-versed in, and eager to stay on top of, the current industry landscape of computer software, programming languages, and technology
  • Bachelor’s degree in a related field (e.g., Computer Science or Software Engineering) with 2+ years of relevant work experience, or a Master’s degree with 1+ years of work experience.
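The data-manipulation and data-quality skills listed above are of the kind sketched below: a short pandas/NumPy example that fills missing values and reshapes a table into a digestible format. The dataset and column names are invented for illustration.

```python
# Short data-manipulation sketch using the pandas/NumPy stack.
# The dataset and column names are made up for illustration.
import numpy as np
import pandas as pd

sales = pd.DataFrame({
    "region":  ["east", "east", "west", "west"],
    "month":   ["jan", "feb", "jan", "feb"],
    "revenue": [100.0, 120.0, 90.0, np.nan],
})

# Data-quality step: fill missing revenue with the mean of its region
sales["revenue"] = sales.groupby("region")["revenue"].transform(
    lambda s: s.fillna(s.mean())
)

# Reshape into an easily digestible wide format (regions as rows, months as columns)
pivot = sales.pivot(index="region", columns="month", values="revenue")
print(pivot)
```

Filling by group mean rather than a global constant is one design choice among several; in practice the imputation rule would be agreed with the data's business owners.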

Apply Through This Link: Click Here
