Job Details

ID #17277448
State Texas
City Addison
Job type Permanent
Salary USD TBD
Source Bank of America
Showed 2021-07-25
Date 2021-07-24
Deadline 2021-09-22
Category Security

Hadoop Developer

Addison, Texas 75001, USA

Vacancy expired!

Job Description:

Come join an exciting team within Global Information Security (GIS). Cyber Security Technology (CST) is a globally distributed team responsible for cyber security innovation and architecture, engineering, solutions and capabilities development, cyber resiliency, access management engineering, data strategy, deployment maintenance, technical project management and information technology security control support.

This role is responsible for developing and delivering data solutions that accomplish technology and business goals. Key responsibilities include code design and delivery tasks associated with the integration, cleaning, transformation and control of data in operational and analytics data systems. The developer works with stakeholders, Product Owners and Software Engineers to implement data requirements, analyze performance, conduct research and troubleshoot issues, and is familiar with the bank's data engineering practices.

Required Skills:
• Experience working with Hadoop/Big Data and distributed systems
• Strong SQL skills in one or more of MySQL, Hive, Impala and Spark SQL
• Data ingestion experience from message queues, file shares, REST APIs, relational databases, etc., and experience with data formats such as JSON, CSV and XML
• Experience with Spark Structured Streaming is a plus
• Working experience with Spark, Oozie, Sqoop, Kafka, MapReduce, NoSQL databases such as HBase, Solr, Cloudera or Hortonworks, Elasticsearch, Kibana, etc.
• Hands-on programming experience in at least one of Scala, Python, PHP or shell scripting
• Performance-tuning experience with Spark/MapReduce and/or SQL jobs
• Experience and proficiency with the Linux operating system is a must
• Experience in the end-to-end design and build of near-real-time and batch data pipelines
• Experience working in an Agile development process and a deep understanding of the phases of the Software Development Life Cycle
• Experience using source code and version control systems such as SVN, Git and Bitbucket
• Experience working with Jenkins and JAR management
• Self-starter who works with minimal supervision and can work in a team of diverse skill sets
• Ability to understand customer requests and provide the correct solution
• Strong analytical mind for taking on complicated problems
• Willingness to dig into and resolve potential issues
• Ability to adapt and continue learning new technologies
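The data-format experience listed above can be illustrated with a minimal sketch (hypothetical record and field names; Python standard library only, no Hadoop stack required) that parses the same record from JSON, CSV and XML:

```python
import csv
import io
import json
import xml.etree.ElementTree as ET

# One hypothetical record expressed in the three formats named above.
JSON_SRC = '{"id": 1, "name": "alice"}'
CSV_SRC = "id,name\n1,alice\n"
XML_SRC = "<record><id>1</id><name>alice</name></record>"

def from_json(text):
    return json.loads(text)

def from_csv(text):
    # DictReader reads the header row as field names; take the first data row.
    row = next(csv.DictReader(io.StringIO(text)))
    return {"id": int(row["id"]), "name": row["name"]}

def from_xml(text):
    root = ET.fromstring(text)
    return {"id": int(root.findtext("id")), "name": root.findtext("name")}

# All three parsers recover the same normalized record.
assert from_json(JSON_SRC) == from_csv(CSV_SRC) == from_xml(XML_SRC)
```

The point of the normalization step is that downstream code sees one record shape regardless of the ingestion format.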

Enterprise Role Overview - Responsible for developing and delivering data solutions to accomplish technology and business goals. Codes design and delivery tasks associated with the integration, cleaning, transformation and control of data in operational and analytics data systems. Works with stakeholders, Product Owners, and Software Engineers to aid in the implementation of data requirements, performance analysis, research and troubleshooting. Familiar with the data engineering practices of the bank. Contributes to story refinement/defining requirements. Participates in estimating the work necessary to realize a story/requirement through the delivery lifecycle. Understands and utilizes basic architecture components in solution development. Codes solutions to integrate, clean, transform and control data in operational and/or analytics data systems per the defined acceptance criteria. Works across development teams to understand and aid in the delivery of data requirements. Assembles large, complex data sets that meet functional/non-functional requirements. Builds processes supporting data transformation, data structures, metadata, data quality controls, dependency and workload management. Defines and builds data pipelines that enable faster, better, data-informed decision-making within the business. Contributes to existing test suites (integration, regression, performance), analyzes test reports, identifies test issues/errors, and triages the underlying cause. Documents and communicates required information for deployment, maintenance, support, and business functionality. Adheres to team delivery/release process and cadence pertaining to code deployment and release. Identifies gaps in adherence to data management standards and works with appropriate partners to develop plans to close them. Individual contributor.
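The integrate/clean/transform/control flow described in the overview can be sketched as a minimal batch pipeline (hypothetical field names; plain Python rather than Spark, to keep the sketch self-contained):

```python
# Hypothetical raw input, as it might arrive from an ingestion source.
RAW = [
    {"id": "1", "amount": "10.5"},
    {"id": "2", "amount": ""},    # incomplete record, dropped by the cleaning step
    {"id": "3", "amount": "4.0"},
]

def clean(rows):
    # Cleaning: drop records that fail a basic completeness check.
    return [r for r in rows if r["amount"]]

def transform(rows):
    # Transformation: cast string fields to their target types.
    return [{"id": int(r["id"]), "amount": float(r["amount"])} for r in rows]

def quality_control(rows):
    # Control: a simple data-quality gate; every amount must be non-negative.
    assert all(r["amount"] >= 0 for r in rows), "negative amount found"
    return rows

result = quality_control(transform(clean(RAW)))
```

In a production pipeline each stage would typically be a Spark job or workflow step, but the staged structure (with an explicit quality gate before data is released downstream) is the same.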

Job Band: H5

Shift: 1st shift (United States of America)

Hours Per Week: 40

Weekly Schedule:

Referral Bonus Amount: 0
