Job Description
Job Overview
We are seeking a Data Engineer who is eager to take initiative and grow within a modern, cloud-based data engineering ecosystem. This role is ideal for candidates with a solid foundation in data and programming concepts who want hands-on experience building, supporting, and improving data pipelines and analytics platforms at scale.
You will gain direct exposure to enterprise-scale cloud data platforms, real-world data challenges, SaaS integrations, and analytics use cases. You will benefit from mentorship from senior data engineers and architects and work closely with business stakeholders to help design, implement, and operate reliable data solutions while developing strong fundamentals in data engineering best practices. We are looking for a proactive, curious learner who is comfortable experimenting, asking questions, learning from feedback, and balancing technical execution with collaboration across teams.
About Us
When you join iCIMS, you join the team helping global companies transform business and the world through the power of talent. Our customers do amazing things: design rocket ships, create vaccines, deliver consumer goods globally, overnight, with a smile. As the Talent Cloud company, we empower these organizations to attract, engage, hire, and advance the right talent. We’re passionate about helping companies build a diverse, winning workforce and about building our home team. We’re dedicated to fostering an inclusive, purpose-driven, and innovative work environment where everyone belongs.
Job Details
| Field | Detail |
| --- | --- |
| Company | iCIMS |
| Job Role | Data Engineer |
| Batches | 2021-2025 |
| Degree | Bachelor’s degree |
| Experience | Freshers/Experienced |
| Location | Hyderabad, India |
| CTC/Salary | INR 6-14 LPA (Expected) |
Responsibilities
- Assist in building, maintaining, and optimizing ETL / ELT data pipelines that process structured and semi-structured data
- Support data ingestion and transformation workflows across cloud platforms such as Azure, Snowflake, Amazon S3, or Redshift
- Write and maintain SQL for data transformation, validation, reconciliation, and analytics
- Contribute to data solutions following medallion architecture principles (Bronze, Silver, Gold layers)
- Assist with data modeling, data profiling, parsing, and data migration activities
- Monitor data pipelines and help identify data quality issues, root causes, and remediation strategies
- Develop and maintain basic automated tests and data reconciliation scripts to improve reliability
- Participate in proofs of concept (PoCs), pilots, and prototypes, helping identify metrics and success criteria
- Work with senior engineers to apply DevOps best practices, including version control, CI/CD, and environment management
- Collaborate directly with business teams to understand data requirements and clearly communicate results
- Support integrations with systems such as Salesforce, NetSuite, APIs, and other SaaS applications to enable reliable data ingestion and downstream analytics
- Learn and apply data privacy, security, and governance principles in day-to-day work
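As a flavor of the reconciliation scripts mentioned above, here is a minimal sketch of a key-level reconciliation check between a source and target dataset. The record sets, key field, and report format are hypothetical, chosen only for illustration:

```python
def reconcile(source_rows, target_rows, key="id"):
    """Compare source and target record sets and report discrepancies.

    Returns row counts plus the keys missing from the target and the
    keys present in the target but absent from the source.
    """
    source_keys = {row[key] for row in source_rows}
    target_keys = {row[key] for row in target_rows}
    return {
        "source_count": len(source_rows),
        "target_count": len(target_rows),
        "missing_in_target": sorted(source_keys - target_keys),
        "unexpected_in_target": sorted(target_keys - source_keys),
    }

# Example: record id 2 was dropped in flight, and id 4 appeared unexpectedly.
source = [{"id": 1}, {"id": 2}, {"id": 3}]
target = [{"id": 1}, {"id": 3}, {"id": 4}]
report = reconcile(source, target)
print(report)
```

In practice a check like this would run after each pipeline load and fail the job (or raise an alert) when either discrepancy list is non-empty.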
Qualifications
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related field (or equivalent experience)
- Foundational knowledge of data engineering technology stacks and data pipeline concepts
- Hands-on experience with SQL and relational data concepts
- Programming experience in Python, Java, Scala, Ruby, or similar languages
- Basic understanding of ETL processes and extracting insights from large datasets
- Familiarity with cloud-based data platforms through coursework, projects, or professional experience
- Ability to learn quickly, take ownership, and work independently with guidance
- Strong problem-solving skills and attention to detail
- Good written and verbal communication skills
Preferred
- Exposure to Databricks, Apache Spark, or distributed data processing concepts
- Familiarity with Azure Data Factory (ADF) or similar orchestration tools
- Experience working with Salesforce data or CRM systems
- Awareness of data quality best practices, profiling techniques, and validation strategies
- Experience with messaging or integration platforms such as Kafka, MuleSoft, or similar tools
- Familiarity with Alteryx or other data preparation / data blending tools
- Understanding of data privacy practices and regulations
- Interest in analytics, reporting, and enabling data-driven decision-making
Apply Through This Link: Click Here
