Data Engineer


Job description

Join us as a Data Engineer

  • This is an exciting opportunity to use your technical expertise to collaborate with colleagues and build effortless, digital-first customer experiences
  • You’ll be simplifying the bank by developing innovative data-driven solutions, using insight to be commercially successful, and keeping our customers’ and the bank’s data safe and secure
  • Participating actively in the data engineering community, you’ll deliver opportunities to support the bank’s strategic direction while building your network across the bank
  • We're offering this role at associate level

What you'll do

As a Data Engineer, you’ll play a key role in driving value for our customers by building data solutions. You’ll be carrying out data engineering tasks to build, maintain, test and optimise a scalable data architecture, as well as carrying out data extractions, transforming data to make it usable by data analysts and scientists, and loading data into data platforms.

You’ll also be:

  • Developing comprehensive knowledge of the bank’s data structures and metrics, advocating change where needed for product development
  • Practicing DevOps adoption in the delivery of data engineering, proactively performing root cause analysis and resolving issues
  • Collaborating closely with core technology and architecture teams in the bank to build data knowledge and data solutions
  • Developing a clear understanding of data platform cost levers to build cost effective and strategic solutions
  • Sourcing new data using the most appropriate tooling and integrating it into the overall solution to deliver for our customers
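Purely as an illustration (not part of the posting), the extract, transform and load cycle described above might look like the following minimal Python sketch. The table, columns and values are hypothetical, and SQLite stands in for a real data platform:

```python
import csv
import io
import sqlite3

# Hypothetical raw extract, e.g. a CSV export pulled from an upstream system.
raw = io.StringIO("customer_id,balance\n1, 250.00 \n2,1000.50\n")

# Extract: read the raw records.
rows = list(csv.DictReader(raw))

# Transform: clean and type the fields so analysts can query them directly.
cleaned = [(int(r["customer_id"]), float(r["balance"].strip())) for r in rows]

# Load: write into a queryable store (an in-memory SQLite database here).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE balances (customer_id INTEGER, balance REAL)")
conn.executemany("INSERT INTO balances VALUES (?, ?)", cleaned)

total = conn.execute("SELECT SUM(balance) FROM balances").fetchone()[0]
print(total)  # 1250.5
```

In a production pipeline the same three stages would typically run on Spark against a warehouse or lake, but the shape of the work is the same.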

The skills you'll need

To be successful in this role, you’ll need a good understanding of data usage and dependencies with wider teams and the end customer, as well as experience of extracting value and features from large-scale data. You'll have experience of data warehouse and data lake projects and strong knowledge of a data engineering tech stack such as Spark, SQL, Python and PySpark.

You'll also need a good understanding of the cloud tech stack, including AWS services such as EMR, IAM and S3, and DevOps tooling such as CI/CD, GitLab and GitLab runners.

You’ll also demonstrate:

  • Experience of ETL technical design, including data quality testing, cleansing and monitoring, and data warehousing and data modelling capabilities
  • Experience of using programming languages alongside knowledge of data and software engineering fundamentals
  • Good knowledge of modern code development practices
  • Strong communication skills with the ability to proactively engage with a wide range of stakeholders
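The data quality testing mentioned in the first bullet can be sketched as follows. This is an illustration only; the rules and field names are made up, not taken from the posting:

```python
# Minimal data-quality checks of the kind run inside an ETL pipeline.
# The validation rules and field names here are hypothetical examples.

records = [
    {"account": "A-1", "amount": 120.0},
    {"account": "A-2", "amount": -5.0},   # fails the non-negative rule
    {"account": None,  "amount": 40.0},   # fails the completeness rule
]

def check_quality(rows):
    """Split rows into those that pass and (row, reason) pairs that fail."""
    passed, failed = [], []
    for row in rows:
        if row["account"] is None:
            failed.append((row, "missing account"))
        elif row["amount"] < 0:
            failed.append((row, "negative amount"))
        else:
            passed.append(row)
    return passed, failed

passed, failed = check_quality(records)
print(len(passed), len(failed))  # 1 2
```

Real pipelines usually express such rules declaratively in a testing framework and route failures to monitoring, but the pass/fail split is the core idea.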

Company

NatWest Group

Job Posted

3 months ago

Job Type

Full-time

Work Mode

On-site

Experience Level

0-2 Years

Category

Data & Analytics

Locations

Bengaluru, Karnataka, India

Qualification

Bachelor

Applicants

430 applicants

Related Jobs


Data & Analytics Analyst

NatWest Group

Bengaluru, Karnataka, India

Posted: 2 months ago

Job description

Join us as a Data & Analytics Analyst

  • Take on a new challenge in Data & Analytics and help us shape the future of our business
  • You’ll be helping to manage the analysis of complex data to identify business issues and opportunities, and supporting the delivery of high quality business solutions
  • We're committed to mapping a career path that works for you, with a focus on helping you build new skills and engage with the latest ideas and technologies in data analytics
  • We're offering this role at associate level

What you'll do

As a Data & Analytics Analyst, you’ll be planning and providing high quality analytical input to support the development and implementation of innovative processes and problem resolution. You’ll be capturing, validating and documenting business and data requirements, making sure they are in line with key strategic principles. We’ll look to you to interrogate, interpret and visualise large volumes of data to identify, support and challenge business opportunities and identify solutions.

You’ll also be:

  • Performing data extraction, storage, manipulation, processing and analysis
  • Conducting and supporting options analysis, identifying the most appropriate solution
  • Helping to maintain full traceability and linkage of business requirements of analytics outputs
  • Seeking opportunities to challenge and improve current business processes, ensuring the best result for the customer
  • Creating and executing quality assurance at various stages of the project in order to validate the analysis and to ensure data quality, identify data inconsistencies, and resolve as needed

The skills you'll need

You’ll need a background in business analysis tools and techniques, along with the ability to influence through communications tailored to a specific audience. Additionally, you’ll need the ability to use core technical skills.
You’ll also demonstrate:

  • A qualification in B.Tech or MCA with strong programming knowledge in C#, VB and Python
  • Strong analytic and problem-solving abilities
  • Accountability for the full traceability and linkage of business requirements to RPA outputs
  • A background in RPA development tools and techniques, along with the ability to influence through communications tailored to a specific audience, and the ability to use core technical skills
  • UiPath Automation Developer Professional certification is preferred


Data Engineer

Capgemini

Bengaluru, Karnataka, India

Posted: 9 months ago

At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world’s most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities. Where you can make a difference. Where no two days are the same.

Job Description:

  • Expert knowledge in Python
  • Expert knowledge in popular machine learning libraries and frameworks, such as TensorFlow, Keras and scikit-learn
  • Proficient understanding and application of clustering algorithms (e.g. K-means, hierarchical clustering) for grouping similar data points
  • Expertise in classification algorithms (e.g. decision trees, support vector machines, random forests) for tasks such as image recognition, natural language processing and recommendation systems
  • Proficiency in working with databases, both relational and non-relational, such as MySQL, with experience in designing database schemas and optimizing queries for efficient data retrieval
  • Strong knowledge in areas like object-oriented analysis and design, multi-threading, multi-process handling and memory management
  • Good knowledge of model evaluation metrics and techniques
  • Experience in deploying machine learning models to production environments
  • Currently working in an Agile scrum team and proficient in using version control systems (e.g. Git) for collaborative development

Primary Skills:

  • Excellent Python coding
  • Excellent communication skills
  • Good data modelling and knowledge of popular machine learning libraries and frameworks


Associate Data Engineer

Zscaler

Bengaluru, Karnataka, India

Posted: 2 months ago

About Zscaler

Serving thousands of enterprise customers around the world including 40% of Fortune 500 companies, Zscaler (NASDAQ: ZS) was founded in 2007 with a mission to make the cloud a safe place to do business and a more enjoyable experience for enterprise users. As the operator of the world’s largest security cloud, Zscaler accelerates digital transformation so enterprises can be more agile, efficient, resilient, and secure. The pioneering, AI-powered Zscaler Zero Trust Exchange™ platform, which is found in our SASE and SSE offerings, protects thousands of enterprise customers from cyberattacks and data loss by securely connecting users, devices, and applications in any location. Named a Best Workplace in Technology by Fortune and others, Zscaler fosters an inclusive and supportive culture that is home to some of the brightest minds in the industry. If you thrive in an environment that is fast-paced and collaborative, and you are passionate about building and innovating for the greater good, come make your next move with Zscaler.

Our Engineering team built the world’s largest cloud security platform from the ground up, and we keep building. With more than 100 patents and big plans for enhancing services and increasing our global footprint, the team has made us and our multitenant architecture today's cloud security leader, with more than 15 million users in 185 countries. Bring your vision and passion to our team of cloud architects, software engineers, security experts, and more who are enabling organizations worldwide to harness speed and agility with a cloud-first strategy. We're looking for an experienced Associate Data Engineer, Enterprise Data Platform to join our IT/Data Strategy team.
Reporting to the Staff Data Engineer, you will be responsible for:

  • Designing, constructing and maintaining efficient data pipelines and integrations to address the organization's analytics and reporting needs
  • Partnering with architects, integration, and engineering teams to gather requirements and deliver impactful and reliable data solutions
  • Identifying and sourcing data from multiple systems, profiling datasets to ensure they support informed decision-making processes
  • Optimizing existing pipelines and data models while creating new features aligned with business objectives and improved functionality
  • Implementing data standards, leveraging cloud and big data advancements, and using tools like Snowflake, DBT, and AWS to deliver innovative data solutions

What We're Looking For (Minimum Qualifications)

  • 0-2 years in data warehouse design, development, SQL, and data modeling with proficiency in efficient query writing for large datasets
  • Skilled in Python for API integration and data workflows, with hands-on expertise in ELT tools (e.g. Matillion, Fivetran, DBT)
  • Extensive experience with AWS services (EC2, S3, Lambda, Glue), CI/CD processes, Git, and Snowflake concepts
  • Eagerness to learn GenAI technologies
  • Strong analytical skills and an ability to manage multiple projects simultaneously

What Will Make You Stand Out (Preferred Qualifications)

  • Experience with Data Mesh architecture
  • Knowledge of the foundational concepts of machine learning models