Platform Engineer

  • Mumbai
Job Description:

  • Develop and deliver the automation software required for building and improving the functionality, reliability, availability, and manageability of applications and cloud platforms.
  • Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset.
  • Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools.
  • Automate the development and test automation processes through a CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers).
  • Build a container hosting platform using Kubernetes.
  • Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area and drive greater business value.

Required Experience, Skills and Qualifications:
  • Excellent written and verbal communication skills; a good listener.
  • Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP); strong hands-on experience in at least one of them.
  • Well versed in service-oriented architecture, cloud-based web services architecture, design patterns, and frameworks.
  • Good knowledge of cloud-related services such as compute, storage, networking, messaging (e.g., SNS, SQS), and automation (e.g., CFT/Terraform).
  • Experience with relational SQL and NoSQL databases, including Postgres and Cassandra.
  • Experience with systems management and automation tools (Puppet, Chef, Ansible, Terraform).
  • Strong Linux system administration experience with excellent troubleshooting and problem-solving skills.
  • Hands-on experience with languages such as Bash, Python, Core Java, or Scala.
  • Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.).
  • Experience integrating solutions in a multi-region environment.
  • Self-motivated; able to learn quickly and deliver results with minimal supervision.
  • Experience with Agile/Scrum/DevOps software development methodologies.
Good to have:
  • Experience setting up an Elasticsearch, Logstash, and Kibana (ELK) stack.
  • Experience working with large-scale data.
  • Experience with monitoring tools such as Splunk, Nagios, Grafana, DataDog, etc.
  • Prior experience working with distributed architectures such as Hadoop and MapReduce.