Specialist Infrastructure - DevOps GCP

Job Description

Publicis Sapient is looking for a Cloud & DevOps Specialist to join our team of bright thinkers and doers. Our environment and culture foster growth and present exciting opportunities to hone your skills in the industries we support and in business problem-solving. You will contribute ideas for improving DevOps practices, delivering innovation through automation. We are on a mission to transform the world, and you will be instrumental in shaping how we do it with your ideas, thoughts, and solutions.

Your Impact:

  • Bring hands-on technological expertise, passion, and innovation to the table.
  • Design and enable application support, and manage production farms and various infrastructure platforms for different delivery teams.
  • Act as a subject matter expert and systems architect, responsible for designing and building scalable, efficient infrastructure platforms.
  • Establish best practices, cultivate thought leadership, and develop common practices and solutions for infrastructure.

#LI-REMOTE 

Qualifications

Your Skills & Experience:

  • 9 to 12 years of experience in DevOps with a Bachelor's in Engineering/Technology or a Master's in Engineering/Computer Applications
  • Expertise in DevOps & Cloud tools:
  • Cloud (Azure, GCP)
  • Version Control (Git, Gitlab, GitHub)
  • Hands-on experience in container infrastructure (Docker, Kubernetes, hosted solutions)
    • Ability to define container-based environment topology following well-architected framework design principles.
    • Ability to design and implement advanced capabilities using service mesh technologies such as Istio, Linkerd, and Kuma
  • Infrastructure automation (Chef/Puppet/Ansible, Terraform, ARM, CloudFormation)
  • Build tools (Ant, Maven, Make, Gradle)
  • Artifact repositories (Nexus, JFrog Artifactory)
  • CI/CD tools on-premises/cloud (Jenkins, TeamCity)
  • Monitoring, logging, and security (CloudWatch, CloudTrail, Log Analytics, hosted tools such as ELK, EFK, Splunk, Prometheus, OWASP, SAST, and DAST)
  • Scripting languages: Python, Ant, Bash, and Shell
  • Hands-on experience in designing pipelines and pipelines as code.
  • Hands-on experience with end-to-end deployment processes and strategies.
  • Good exposure to tools and technologies used in building a container-based infrastructure.
  • Hands-on experience with GCP/AWS/Azure and a good understanding of compute, networking, IAM, security, and integration services, with production knowledge of:
    • Implementing strategies for reliability requirements
    • Ensuring business continuity
    • Meeting performance objectives
    • Security requirements and controls
    • Deployment strategies for business requirements
    • Cost optimization, etc.
  • Manage installation, configuration, automation, performance, monitoring, capacity planning, and availability of various servers and databases, with expert-level automation skills.
  • Knowledge of load balancing and CDN options offered by multiple cloud vendors (e.g., Load Balancer and Application Gateway in Azure; ELB and ALB in AWS)
  • Good knowledge of failover and availability algorithms for networks.
  • Ability to write complex code
    • e.g., automating recurring or mundane tasks and OS administration (CPU, memory, and network performance troubleshooting), along with strong troubleshooting skills; see the automation sketch after this list
  • Demonstrated HA/DR design on cloud platforms per SLA/RTO/RPO requirements
  • Good knowledge of migration tools available from cloud vendors and independent providers
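
As an illustration only (not part of the original posting), the short Python sketch below shows the kind of recurring-task automation referenced in the list above: it samples CPU, memory, and disk usage via the third-party psutil library and flags hosts that breach assumed thresholds. The threshold values and the notify() hook are hypothetical placeholders, not requirements of the role.

```python
#!/usr/bin/env python3
"""Minimal sketch: a recurring host health check (illustrative only).

Assumes the third-party `psutil` package is installed; the thresholds and
the notify() hook are hypothetical placeholders, not part of the posting.
"""
import socket

import psutil

# Hypothetical alert thresholds (percent utilization).
THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 80.0}


def sample_usage() -> dict:
    """Collect current CPU, memory, and root-filesystem utilization."""
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }


def notify(host: str, metric: str, value: float, limit: float) -> None:
    """Placeholder for a real alerting integration (email, Slack, PagerDuty)."""
    print(f"[ALERT] {host}: {metric} at {value:.1f}% (limit {limit:.1f}%)")


def main() -> None:
    host = socket.gethostname()
    for metric, value in sample_usage().items():
        limit = THRESHOLDS[metric]
        if value >= limit:
            notify(host, metric, value, limit)
        else:
            print(f"[OK] {host}: {metric} at {value:.1f}%")


if __name__ == "__main__":
    main()
```

In practice a job like this would run from cron or a scheduler and feed the monitoring stack (Prometheus, CloudWatch, and similar) rather than print to stdout.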

Set Yourself Apart With:

  • Ability to estimate the setup time required for infrastructure and build-and-release activities.
  • Good working knowledge of the Linux operating system.
  • Skill development, knowledge-base creation, and toolset optimization for the practice.
  • Handling content delivery networks and performing root-cause analysis.
  • Understanding of at least one DBMS such as MySQL or Oracle, or a NoSQL database such as Cassandra or MongoDB.
  • Capacity planning and infrastructure estimation (a worked sketch follows this list).
  • Working knowledge of scripting in at least one of Bash, Python, Perl, or Ruby.
  • Certification in any cloud (Architect or Professional).
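
As a hedged illustration of the capacity-planning item above (not part of the original posting), the Python sketch below makes a back-of-the-envelope node-count estimate from per-pod resource requests and node capacity. All figures, including the 30% reserved headroom, are hypothetical examples; a real estimate would also account for daemonsets, system reservations, and failure domains.

```python
"""Minimal sketch: back-of-the-envelope cluster capacity estimate (illustrative)."""
import math


def nodes_required(pod_count: int,
                   pod_cpu: float, pod_mem_gib: float,
                   node_cpu: float, node_mem_gib: float,
                   headroom: float = 0.7) -> int:
    """Return a node count whose usable capacity covers the total pod requests.

    `headroom` is the assumed fraction of each node usable for workloads.
    """
    by_cpu = (pod_count * pod_cpu) / (node_cpu * headroom)
    by_mem = (pod_count * pod_mem_gib) / (node_mem_gib * headroom)
    return math.ceil(max(by_cpu, by_mem))


if __name__ == "__main__":
    # Hypothetical workload: 120 pods requesting 0.5 vCPU / 1 GiB each,
    # scheduled onto 8 vCPU / 32 GiB nodes with 30% reserved headroom.
    print(nodes_required(120, 0.5, 1.0, 8, 32))  # -> 11 (CPU-bound)
```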

Additional Information

  • Gender Neutral Policy
  • 18 paid holidays throughout the year.
  • Generous parental leave and new parent transition program
  • Flexible work arrangements
  • Employee Assistance Programs to support your wellness and well-being

Company Description

Publicis Sapient is a digital transformation partner helping established organizations get to their future, digitally-enabled state, both in the way they work and the way they serve their customers. We help unlock value through a start-up mindset and modern methods, fusing strategy, consulting and customer experience with agile engineering and problem-solving creativity. United by our core values and our purpose of helping people thrive in the brave pursuit of next, our 20,000+ people in 53 offices around the world combine experience across technology, data sciences, consulting and customer obsession to accelerate our clients’ businesses through designing the products and services their customers truly value.

Company: Publicis Sapient
Job Posted: a year ago
Job Type: Full-time
Work Mode: On-site
Experience Level: 8-12 years
Category: IT Services and IT Consulting
Location: Bengaluru, Karnataka, India
Qualification: Bachelor's or Master's degree

