Datalake Mphasis Drive

3 to 4 Years | Hyderabad | 09 Aug, 2021
Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: IT - Software
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

Who are we looking for?
An experienced Big Data Spark developer is needed for the Data Lake portfolio.

Technical Skills:
BIG DATA Developer / SE (Spark Structured Streaming + Spark SQL + Kafka)
Databricks, ADF
Azure cloud experience

Qualifications:
3 to 4 years of total IT experience, including 2+ years of Big Data experience
Experience in Spark Structured Streaming, Kafka, Spark SQL, and Scala is a must
Should possess knowledge of Databricks and ADF
Should have experience working on the Azure cloud platform
Experience in building real-time data streaming pipelines from Kafka (or any message broker) using Spark Structured Streaming (see the sketch after this list)
Hands-on functional / object-oriented programming (Scala, Python, or Java 8) prior to Big Data projects
Able to develop from a provided design within Big Data projects, with minimal guidance
Able to understand data models on Hive and HBase for high performance and storage
Part of at least one end-to-end Hadoop data lake project (streaming real-time data)
Proficient in Linux/Unix scripting
Experience in Agile methodology is a must
Knowledge of standard methodologies, concepts, best practices, and procedures within the HDF Big Data environment
Self-starter, able to independently implement the solution
Good problem-solving techniques and communication
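As a concrete illustration of the streaming-pipeline requirement above, here is a minimal Scala sketch of a Spark Structured Streaming job that reads from Kafka and lands data in a lake path. The broker address, topic, and paths are illustrative placeholders, not details from this posting.

import org.apache.spark.sql.SparkSession

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("kafka-stream-sketch").getOrCreate()

    // Kafka source: rows arrive with binary key/value columns.
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // placeholder broker
      .option("subscribe", "events")                    // placeholder topic
      .option("startingOffsets", "latest")
      .load()

    // Decode the payload; a real job would parse JSON/Avro into columns here.
    val events = raw.selectExpr("CAST(value AS STRING) AS payload")

    // Sink to the data lake; the checkpoint makes the sink fault-tolerant.
    val query = events.writeStream
      .format("parquet")
      .option("path", "/datalake/events")               // placeholder path
      .option("checkpointLocation", "/checkpoints/events")
      .start()

    query.awaitTermination()
  }
}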
Responsibilities:
Hands-on Big Data Module Lead / SSE role (Spark Structured Streaming)
Actively participate in scrum calls, story pointing, and estimates, and own the development piece
Analyze the user stories, understand the requirements, and develop the code as per the design
Develop test cases, perform unit testing and integration testing
Support QA testing, UAT, and production deployment
Develop batch and real-time data load jobs from a broad variety of data sources into Hadoop, and design ETL jobs to read data from Hadoop and pass it to a variety of consumers / destinations (see the batch ETL sketch below)
Perform analysis of vast data stores and uncover insights
Analyze long-running queries and jobs, and performance-tune them using query optimization techniques and Spark code optimization (see the tuning sketch below)

Behavioral Skills:
Has to carry the right attitude
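The batch ETL responsibility above can be illustrated with a short Scala sketch: read Parquet data from Hadoop, transform it with Spark SQL, and write the result for downstream consumers. The paths, table name, and columns are assumed placeholders.

import org.apache.spark.sql.SparkSession

object BatchEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("batch-etl-sketch").getOrCreate()

    // Read curated data from Hadoop (Parquet layout assumed).
    val orders = spark.read.parquet("hdfs:///datalake/orders") // placeholder path
    orders.createOrReplaceTempView("orders")

    // Transform with Spark SQL; column names are hypothetical.
    val daily = spark.sql(
      """SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
        |FROM orders
        |GROUP BY order_date""".stripMargin)

    // Hand the result to a downstream destination (here, a serving path).
    daily.write.mode("overwrite").parquet("hdfs:///serving/daily_orders")

    spark.stop()
  }
}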

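For the performance-tuning responsibility, a minimal Scala sketch of common Spark optimization moves: prune columns and filter early, broadcast the small side of a join, cache reused results, and inspect the plan with explain(). All paths and column names are hypothetical.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.broadcast

object TuningSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("tuning-sketch").getOrCreate()

    val facts = spark.read.parquet("hdfs:///datalake/facts") // placeholder paths
    val dims  = spark.read.parquet("hdfs:///datalake/dims")

    // 1. Prune columns and filter early so less data is shuffled.
    val trimmed = facts.select("id", "dim_id", "amount").filter("amount > 0")

    // 2. Broadcast the small side of the join to avoid a full shuffle join.
    val joined = trimmed.join(broadcast(dims), trimmed("dim_id") === dims("id"))

    // 3. Cache a result that several downstream queries reuse.
    joined.cache()

    // 4. Inspect the physical plan to confirm the optimizations took effect.
    joined.explain()

    spark.stop()
  }
}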
Keyskills:
object oriented programming, big data, QA testing, test cases, data models, unit testing, user stories, data streaming, problem solving, real-time data
