
Specialist Infrastructure - DevOps Azure


Job Description

Publicis Sapient is looking for a Cloud & DevOps Specialist to join our team of bright thinkers and doers. Our environment and culture foster growth and present exciting opportunities to hone your skills in the industries we support and in business problem-solving. You will contribute ideas for improving DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact:

  • Bring hands-on technological expertise, passion, and innovation to the table.
  • Design and enable application support, and manage production farms and a variety of infrastructure platforms for different delivery teams.
  • In the capacity of a subject matter expert, act as a systems architect to design and build scalable and efficient infrastructure platforms.
  • Establish best practices, cultivate thought leadership, and develop common practices and solutions for infrastructure.

#LI-REMOTE 

Qualifications

 

Your Skills & Experience:

  • 9 to 12 years of experience in DevOps with a bachelor's in Engineering/Technology or a master's in Engineering/Computer Applications
  • Expertise in DevOps & Cloud tools:
    • Cloud (Azure, GCP)
    • Version control (Git, GitLab, GitHub)
    • Hands-on experience with container infrastructure (Docker, Kubernetes, hosted solutions)
      • Ability to define container-based environment topology following well-architected framework design principles.
      • Ability to design and implement advanced aspects using service mesh technologies such as Istio, Linkerd, Kuma, etc.
    • Infrastructure automation (Chef/Puppet/Ansible, Terraform, ARM, CloudFormation); see the sketch after this list
    • Build tools (Ant, Maven, Make, Gradle)
    • Artifact repositories (Nexus, JFrog Artifactory)
    • CI/CD tools, on-premises and cloud (Jenkins, TeamCity)
    • Monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics, and hosted tools such as ELK, EFK, Splunk, Prometheus, OWASP, SAST, and DAST)
    • Scripting languages: Python, Ant, Bash, and Shell
  • Hands-on experience designing pipelines and pipelines as code.
  • Hands-on experience with end-to-end deployment processes and strategy.
  • Good exposure to the tools and technologies used in building container-based infrastructure.
  • Hands-on experience with GCP/AWS/Azure and a good understanding of compute, networking, IAM, security, and integration services, with production knowledge of:
    • Implementing strategies for reliability requirements
    • Ensuring business continuity
    • Meeting performance objectives
    • Security requirements and controls
    • Deployment strategies for business requirements
    • Cost optimization, etc.
  • Managing installation, configuration, automation, performance, monitoring, capacity planning, and availability management of various servers and databases; expert automation skills.
  • Knowledge of load balancing and the CDN options offered by the major cloud vendors (e.g., Load Balancer and Application Gateway in Azure; ELB and ALB in AWS)
  • Good knowledge of network algorithms for failover and availability.
  • Capability to write complex code
    • e.g., automation of recurring/mundane tasks and OS administration (CPU, memory, and network performance troubleshooting), along with strong troubleshooting skills
  • Demonstrated HA/DR design on cloud platforms in line with SLA/RTO/RPO requirements
  • Good knowledge of the migration tools available from cloud vendors and independent providers
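
For illustration only, here is a minimal Python sketch of the kind of infrastructure automation referenced in the list above. It uses Microsoft's azure-identity and azure-mgmt-resource SDKs to provision a resource group; the subscription ID, group name, and region are placeholders, and real projects would more often express this in Terraform or ARM templates as listed.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholder subscription ID -- substitute your own.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"

# DefaultAzureCredential resolves credentials from the environment,
# a managed identity, or an Azure CLI login, in that order.
credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, SUBSCRIPTION_ID)

# Create (or update) a resource group, the container into which all
# other Azure infrastructure is deployed.
rg = client.resource_groups.create_or_update(
    "demo-devops-rg",  # hypothetical name
    {"location": "centralindia", "tags": {"env": "demo"}},
)
print(f"Provisioned {rg.name} in {rg.location}")
```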

Set Yourself Apart With:

  • The ability to estimate the setup time required for infrastructure and build & release activities.
  • Good working knowledge of the Linux operating system.
  • Skill development, knowledge-base creation, and toolset optimization for the practice.
  • Handling content delivery networks and performing root cause analysis.
  • Understanding of at least one DBMS, such as MySQL or Oracle, or a NoSQL store, such as Cassandra or MongoDB.
  • Capacity planning and infrastructure estimation.
  • Working knowledge of scripting in at least one of Bash, Python, Perl, or Ruby (see the example below).
  • Certification in any cloud (Architect or Professional).
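
As a small example of the scripting expectation above, the following Python snippet automates one of the recurring tasks mentioned earlier (disk-usage monitoring); the mount points and threshold are illustrative values.

```python
import shutil

# Illustrative mount points; adjust for the servers being monitored.
MOUNTS = ["/", "/var", "/home"]
THRESHOLD = 0.85  # flag filesystems more than 85% full

def check_disk_usage(paths, threshold):
    """Return (path, used_fraction) pairs that exceed the threshold."""
    alerts = []
    for path in paths:
        usage = shutil.disk_usage(path)  # named tuple: total, used, free
        used_fraction = usage.used / usage.total
        if used_fraction > threshold:
            alerts.append((path, used_fraction))
    return alerts

if __name__ == "__main__":
    for path, used in check_disk_usage(MOUNTS, THRESHOLD):
        print(f"WARNING: {path} is {used:.0%} full")
```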

Additional Information

  • Gender Neutral Policy
  • 18 paid holidays throughout the year.
  • Generous parental leave and new parent transition program
  • Flexible work arrangements
  • Employee Assistance Programs to support your wellness and well-being

Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting, and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting, and customer obsession to accelerate our clients' businesses through designing the products and services their customers truly value.


Job Details

  • Company: Publicis Sapient
  • Job Posted: a year ago
  • Job Type: Full-time
  • Work Mode: Remote
  • Experience Level: 8-12 years
  • Category: IT Services and IT Consulting
  • Location: Bengaluru, Karnataka, India
  • Qualification: Bachelor's or Master's


Related Jobs

  • Specialist Infrastructure - DevOps GCP (Publicis Sapient; Bengaluru, Karnataka, India; posted a year ago)
  • Senior Associate Infrastructure L2 - DevOps GCP (Publicis Sapient; Bengaluru, Karnataka, India; posted a year ago)
  • Manager Data Engineering DE - Big Data Azure (Publicis Sapient; Bengaluru, Karnataka, India; posted a year ago)
  • Manager Data Engineering DE - Big Data AWS (Publicis Sapient; Bengaluru, Karnataka, India; posted a year ago)