
Data & MLOps Platform Engineer (Python)

Experience: 1.00 to 3.00 Years
Posted: 23 Mar, 2022
Job Location: Hyderabad
Education: Not Mentioned
Salary: Not Disclosed
Industry: Banking / Financial Services
Functional Area: General / Other Software
Employment Type: Full-time

Job Description

    Our Global Technology Infrastructure (GTI) group is a team of innovators who love technology as much as you do. Together, you'll use a disciplined, innovative, and business-focused approach to develop a wide variety of high-quality products and solutions. You'll work in a stable, resilient, and secure operating environment where you and the products you deliver will thrive.

    The Technology Reference Data team is seeking an MLOps engineer for the GTI Data Lake platform. GTI Data Lake ingests, curates, and makes available to our consumers all types of technology data, including events, logs, metrics, alerts, reference, and synthetic data generated by our on- and off-premises infrastructure. Through careful data curation and various analytics and business intelligence facilities, this data platform provides insights into operational metrics, issues, controls/compliance, and improvement opportunities, as well as supplying highly curated data to power our ML/AI operations.

    The MLOps Engineer will be an agent of delivery who builds the data plumbing that exploits analytical and big data engineering tools to productionize real-time, high-throughput, high-volume data transformation pipelines that enable the firm's observability and drive AI-ops. They will work hand in hand with our data scientists and infrastructure engineers to facilitate the delivery of solutions at an industrial scale. The successful candidate will possess a strong foundation in software engineering and big data, as well as a functional understanding of data science workflows, such that they understand and are able to implement strategic technical solutions. They must be able to take a holistic view of a business problem or challenge and work with the various technical groups needed to develop and drive solutions to production. They will be responsible for implementing technical direction, understanding the goals of our data scientists, and applying best practices to optimize one-off data science solutions into robust, resilient, scalable solutions.

    Duties
    • Review, understand, optimize, and automate existing one-off data transformation code (which feeds ML models) into discrete, scalable tasks (a minimal illustrative sketch follows this list)
    • Shape the development of a real time data transformation pipeline and platform features to enable continued MLOps maturity
    • Work with internal clients to fully understand data content and intent of workflow and solution
    • Gather, understand, and document detailed technical requirements using appropriate tools and techniques; disseminate and educate others
    • Contribute to specifying the underlying infrastructure, SDKs and platform we are building to support bespoke data transformation pipelines and enable predictive models to be easily productionized and run at scale.
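    The duty list above calls for turning one-off analysis code into discrete, parameterized tasks. The following is a minimal sketch of what that refactoring might look like in Python; the paths, column names, and the TransformConfig/build_feature_frame names are hypothetical and purely illustrative, not the team's actual code.

# Minimal sketch (hypothetical names and paths): a notebook-style transformation
# refactored into a discrete, parameterized task that can be scheduled and tested.
from dataclasses import dataclass

import pandas as pd


@dataclass
class TransformConfig:
    # Parameters that were previously hard-coded in the one-off script.
    source_path: str
    output_path: str
    drop_null_columns: bool = True


def build_feature_frame(cfg: TransformConfig) -> pd.DataFrame:
    # One discrete, repeatable step: read raw events, clean them, and emit
    # the feature frame consumed by a downstream ML model.
    events = pd.read_parquet(cfg.source_path)
    if cfg.drop_null_columns:
        events = events.dropna(axis="columns", how="all")
    # Placeholder feature: per-host event counts.
    features = events.groupby("host_id").size().rename("event_count").reset_index()
    features.to_parquet(cfg.output_path, index=False)
    return features


if __name__ == "__main__":
    build_feature_frame(TransformConfig("raw_events.parquet", "features.parquet"))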
    Skills & Experience Required
    Leadership
    • At least 5 years' professional experience as a software developer, including working on projects with larger teams
    • Experience in multi-disciplinary teams, with an end-to-end view of systems and processes
    • Minimum of 1 year experience working in agile development methodologies like XP, Kanban or Scrum.
    • Able to identify, shape, and drive stories to completion
    Data Engineering & Integration
    • Proficient in Python and experience using Jupyter Notebooks
    • Hands-on experience using Spark (in Python) to develop high volume ETL pipelines
    • Experience in developing CI/CD solutions for automating builds and deployment
    • Hands-on experience with containerization (Docker, Kubernetes, etc.)
    • Hands-on experience with workflow management tools such as Airflow (a minimal Spark-plus-Airflow sketch follows this list)
    • MLflow / Kubeflow experience is a big plus
    • Hands-on experience with data analysis and data processing using Java, Python, or R is a big plus
    • Experience with data visualization tools, e.g. Matplotlib, Seaborn, or ggplot2, is also a plus
    • Strong understanding of RESTful APIs and/or data streaming is a big plus
    • Experience with a variety of NoSQL database technologies such as Cassandra, HBase, or Elasticsearch is beneficial
    • Experience working in public cloud environments such as AWS, Azure, and/or Google Cloud is also a plus
    • Experience with modern version control (GitHub, Bitbucket) is required
    • Solid grasp of basic networking technologies and infrastructure is preferred but not required
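    To make the Spark and Airflow items above concrete, here is a minimal sketch assuming Airflow 2.x and a locally created Spark session; the DAG id, schedule, and /data paths are hypothetical, and a real deployment would more likely submit the job to a cluster.

# Minimal sketch: a PySpark curation step wrapped as a single Airflow task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def curate_metrics():
    # Import inside the task so the Airflow scheduler process does not need PySpark.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("curate_metrics").getOrCreate()
    raw = spark.read.json("/data/raw/metrics/")  # hypothetical landing zone
    curated = (
        raw.filter(F.col("value").isNotNull())
           .withColumn("ingested_at", F.current_timestamp())
    )
    curated.write.mode("overwrite").parquet("/data/curated/metrics/")
    spark.stop()


with DAG(
    dag_id="curate_metrics_daily",
    start_date=datetime(2022, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    PythonOperator(task_id="curate_metrics", python_callable=curate_metrics)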
    Requirements & Problem Solving
    • Experience of producing documentation/specifications for technical solutions
    • Keen to learn and continually develop skillset, collaborate and express ideas to make improvements
    • Demonstrates a passion for using data to solve problems and deliver value
    • Strong problem-solving skills and ability to logically analyze complex requirements, processes and systems

Keyskills :
java, css, danish, forms, gateway, big data, data science, data analysis, data curation, reference data, data streaming, data processing, problem solving, version control, data engineering, agile development

