Job Location | Pune
Education | Not Mentioned
Salary | Not Disclosed
Industry | IT - Software
Functional Area | Operations Management / Process Analysis
Employment Type | Full-time
Dear Candidate,

Greetings of the day! We are hiring for a Big Data Operations Engineer with a prominent organisation.

Roles and Responsibilities
- Assist in setting up cloud services, infrastructure and frameworks to deploy data engineering & analytics pipelines.
- Create and manage user permissions for IT/business users (IAM roles/policies create/update capability).
- Work with the network/security team to set up network connections.
- Resolve issues in the Dev/Test/Prod data engineering & analytics pipelines and infrastructure, and monitor them.
- Manage Big Data Operations Rally user stories/tasks.

Skills, Experience and Requirements
- Experience in setting up production Hadoop/Spark clusters with optimum configurations.
- Drive automation of Hadoop deployments, cluster expansion and maintenance operations.
- Manage Hadoop clusters, monitoring alerts and notifications.
- Job scheduling, monitoring, debugging and troubleshooting.
- Monitoring and management of the cluster in all respects, notably availability, performance and security.
- Data transfer between Hadoop and other data stores (incl. relational databases).
- Set up a High Availability/Disaster Recovery environment.
- Debug/troubleshoot environment failures and downtime.
- Performance tuning of Hadoop clusters and Hadoop MapReduce routines.
- Experience with Kafka, Spark, etc.
- Experience working with AWS big data technologies (EMR, Redshift, S3, Glue, Kinesis, DynamoDB and Lambda).
- Good knowledge of creating volumes, security group rules, key pairs, floating IPs, images and snapshots, and deploying instances on AWS.
- Experience configuring and/or integrating with monitoring and logging solutions such as syslog and ELK (Elasticsearch, Logstash and Kibana).
- Strong UNIX/Linux systems administration skills, including configuration, troubleshooting and automation.
- Knowledge of Airflow, NiFi, StreamSets, etc.
- Knowledge of container virtualization.

Regards,
HR Team
Keyskills :
cloud services, data engineering, big data, data operations