At Lowe's Data Analytics and Computational Intelligence, we run large big data platforms for data processing, ML, and data analytics. If you are passionate about setting up big data platforms on-prem and in the cloud, debugging issues, triaging platform problems, and the like, this role is for you. You will be challenged with managing multiple big data platforms on Hadoop, exploring new technologies, setting up open-source data storage platforms, integrating the data platforms with data catalogs and BI tools, and setting up platforms for ML model training and deployment.
You should be motivated to learn, explore technologies, collaborate with users, and set up a modern platform for data processing, storage, data preparation, training, and deployment.

Job Summary:
The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver modules, stable application systems, and platform solutions. This includes developing, configuring, or modifying integrated business and/or enterprise infrastructure or application solutions within various computing environments. This role facilitates the implementation and maintenance of business and enterprise platform solutions to ensure the successful deployment of released applications.

Key Responsibilities:
- Translates business requirements and specifications into logical program designs, modules, stable application systems, and data solutions with occasional guidance from senior colleagues; partners with the Product Team to understand business needs and functional specifications
- Develops, configures, or modifies integrated business and/or enterprise application solutions within various computing environments by designing and coding component-based applications using various programming languages
- Conducts the implementation and maintenance of complex business and enterprise data solutions to ensure successful deployment of released applications
- Supports systems integration testing (SIT) and user acceptance testing (UAT), provides insight into defining test plans, and ensures quality software deployment
- Participates in the end-to-end product lifecycle by applying and sharing an in-depth understanding of company and industry methodologies, policies, standards, and controls
- Understands Computer Science and/or Computer Engineering fundamentals; knows software architecture and readily applies it to platform solutions
- Automates and simplifies team development, test, and operations processes; develops conceptual, logical and physical architectures consisting of one or more viewpoints (business, application, data, and infrastructure) required for business solution delivery
- Solves difficult technical problems; solutions are testable, maintainable, and efficient
- Hadoop admin responsibilities include deploying and maintaining Hadoop clusters, adding and removing nodes using cluster monitoring tools such as Ganglia, Nagios, or Cloudera Manager, configuring NameNode high availability, and keeping track of all running Hadoop jobs (see the monitoring sketch after this list)
- Implementing, managing and administering the overall Hadoop infrastructure.
- Work closely with the data engineering, network, BI, and application teams to ensure that all big data applications are highly available and performing as expected
- The Hadoop admin is responsible for capacity planning and for estimating the requirements for decreasing or increasing the capacity of the Hadoop cluster
- Set up and migrate jobs between on-prem and cloud environments, in either direction; should have knowledge of cloud networking and IAM
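The job-tracking duty mentioned above lends itself to a small script. Below is a minimal sketch that lists the currently running YARN applications through the ResourceManager's standard REST API (/ws/v1/cluster/apps); the hostname and port are assumptions, not values from this posting.

```python
"""Minimal sketch: track running YARN applications via the
ResourceManager REST API. The RM address below is a hypothetical
placeholder -- adjust for your cluster."""
import requests

RM_URL = "http://resourcemanager.example.com:8088"  # assumed RM web address

def running_apps(rm_url: str = RM_URL) -> list[dict]:
    """Return the list of currently RUNNING YARN applications."""
    resp = requests.get(f"{rm_url}/ws/v1/cluster/apps",
                        params={"states": "RUNNING"}, timeout=10)
    resp.raise_for_status()
    # "apps" is null in the response when nothing matches the filter
    apps = resp.json().get("apps") or {}
    return apps.get("app", [])

if __name__ == "__main__":
    for app in running_apps():
        print(f"{app['id']}  user={app['user']}  queue={app['queue']}  "
              f"progress={app['progress']:.0f}%  name={app['name']}")
```

A cron job or monitoring agent could run this periodically and alert on stuck or long-running applications.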
Minimum Qualifications:
- Bachelor's degree in Engineering, Computer Science, CIS, or a related field (or equivalent work experience in a related field)
- 2 years of experience in Data, BI or Platform Engineering, Data Warehousing/ETL, or Software Engineering
- 1 year of experience working on projects involving the implementation of solutions applying the software development life cycle (SDLC)
Preferred Qualifications:
- Master's degree in Computer Science, CIS, or a related field
- 2 years of IT experience developing and implementing business systems within an organization
- 4 years of experience working with defect or incident tracking software
- 4 years of experience with technical documentation in a software development environment
- 2 years of experience working with an IT Infrastructure Library (ITIL) framework
- 2 years of experience leading teams, with or without direct reports
- Experience with application and integration middleware
- Experience with database technologies
- 2-3 years of experience in Hadoop, NoSQL, RDBMS, Teradata, MicroStrategy, or any cloud big data components
- Expertise in Python, SQL, scripting, Teradata, and Hadoop utilities such as Sqoop, Hive, Pig, MapReduce, Spark, Ambari, Ranger, and Kafka, or equivalent cloud big data components (see the Spark sketch below)
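To make the Spark and Hive expectations concrete, here is a minimal PySpark sketch of the kind of job this role would build and maintain: reading a Hive table and writing an aggregated result back to the metastore. The table and column names (sales.orders, store_id, order_total) are hypothetical; the posting does not name any datasets.

```python
"""Minimal sketch of a Spark + Hive batch job. All table and column
names here are hypothetical placeholders."""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# enableHiveSupport() lets Spark read tables registered in the Hive metastore
spark = (SparkSession.builder
         .appName("order-summary")
         .enableHiveSupport()
         .getOrCreate())

# Read a (hypothetical) Hive table, aggregate, and persist the result
orders = spark.table("sales.orders")
summary = (orders
           .groupBy("store_id")
           .agg(F.count("*").alias("order_count"),
                F.sum("order_total").alias("revenue")))
summary.write.mode("overwrite").saveAsTable("sales.store_order_summary")

spark.stop()
```

The same job could be submitted to an on-prem YARN cluster or a cloud Spark service with no code changes, which is the portability the on-prem/cloud migration responsibility above calls for.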