Job Location | Hyderabad |
Education | Not Mentioned |
Salary | Not Disclosed |
Industry | IT - Software |
Functional Area | General / Other Software |
Employment Type | Full-time |
Job Description

Who are we looking for? An experienced Big Data Spark developer is needed for the Data Lake portfolio.

Technical Skills:
- Big Data Developer / SE (Spark Structured Streaming + Spark SQL + Kafka)
- Databricks, ADF
- Azure cloud experience

Qualifications:
- 3 to 4 years of total IT experience, including 2+ years of Big Data experience
- Experience in Spark Structured Streaming, Kafka, Spark SQL, and Scala is a must
- Knowledge of Databricks and ADF
- Experience working on the Azure cloud platform
- Experience building real-time data streaming pipelines from Kafka (or any message broker) using Spark Structured Streaming
- Hands-on functional / object-oriented programming experience (Scala, Python, or Java 8) prior to Big Data projects
- Able to develop from a provided design within Big Data projects with minimal guidance
- Able to understand data models on Hive and HBase for high performance and storage
- Part of at least one end-to-end Hadoop data lake project (streaming real-time data)
- Proficient in Linux/Unix scripting
- Experience in Agile methodology is a must
- Knowledge of standard methodologies, concepts, best practices, and procedures within an HDF Big Data environment
- Self-starter, able to independently implement the solution
- Good problem-solving and communication skills

Responsibilities:
- Hands-on Big Data Module Lead / SSE role (Spark Structured Streaming)
- Actively participate in scrum calls, story pointing, and estimates, and own the development work
- Analyze user stories, understand the requirements, and develop code as per the design
- Develop test cases; perform unit and integration testing
- Support QA testing, UAT, and production deployment
- Develop batch and real-time data load jobs from a broad variety of data sources into Hadoop, and design ETL jobs that read data from Hadoop and deliver it to a variety of consumers / destinations
- Perform analysis of vast data stores and uncover insights
- Analyze long-running queries and jobs, and tune their performance using query optimization techniques and Spark code optimization

Behavioral Skills: Must bring the right attitude.
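For illustration only (this sketch is not part of the posting): the kind of Kafka-to-Spark Structured Streaming pipeline the role describes might look roughly like the following Scala snippet. The broker address, the `events` topic name, and the windowed count are hypothetical placeholders; a real job would run on a cluster (e.g. Databricks) rather than `local[*]` and write to a durable sink instead of the console.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, window}

object KafkaStreamSketch {
  def main(args: Array[String]): Unit = {
    // Hypothetical local session for the sketch.
    val spark = SparkSession.builder()
      .appName("kafka-stream-sketch")
      .master("local[*]")
      .getOrCreate()

    // Subscribe to a hypothetical Kafka topic "events".
    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "events")
      .load()

    // Kafka delivers the payload as binary; cast it to string and keep
    // the record timestamp so we can aggregate over event-time windows.
    val counts = raw
      .select(col("timestamp"), col("value").cast("string").as("payload"))
      .groupBy(window(col("timestamp"), "1 minute"))
      .count()

    // Console sink is for demonstration; production jobs would write to
    // a real destination (Delta table, Hive, another Kafka topic, etc.).
    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
    query.awaitTermination()
  }
}
```

Running it requires the `spark-sql-kafka` connector on the classpath in addition to core Spark.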
Key Skills:
object-oriented programming, big data, QA testing, test cases, data models, unit testing, user stories, data streaming, problem solving, real-time data