The Data Pipeline and Consumption team of Decision Sciences and Transformation (DSaT) is seeking a highly motivated Senior Big Data Engineer to start or continue an IT career on the Big Data Platform project. Teaming up with architects, scrum masters, leads, and managers, you will work in an Agile environment to make the data on the Big Data Platform accessible for the needs of the organization. As a Big Data Engineer, you will work on data ingestion activities to bring large volumes of data into our Big Data Lake in IBM's SoftLayer Cloud. The candidate plays a vital role in building new data pipelines from various structured and unstructured sources into Hadoop. The qualified candidate will work in a heterogeneous environment and needs excellent technical skills, a strong desire to learn, good communication skills, attention to detail, and the ability to pick up new technologies. You will participate in the entire software development lifecycle by writing and executing test plans and finding solutions to issues during development and after deployment.

Candidate Qualifications:
Big Data Engineer:
- 2+ years working with Java-based applications and open source technologies
- 1+ years of experience working with Spring XD, Spring Cloud Data Flow, RabbitMQ, Kafka, or other messaging technologies
- Prior experience working with Hadoop technologies such as HBase, Hive, Oozie, Pig, and Spark is a plus
- Proficiency in scripting with Python, Perl, Ruby, or shell scripting
- 1+ year of production support and operationalization of applications
- Experience with Change Data Capture (CDC) technologies and relational databases such as MS SQL, Oracle and DB2
- Knowledge and understanding of SDLC and Agile/Scrum procedures, processes and practices is required
- Experience mentoring software developers

Senior Big Data Engineer:
- 5+ years working with Java-based applications and open source technologies
- 2+ years of experience working with Spring XD, Spring Cloud Data Flow, RabbitMQ, Kafka, or other messaging technologies
- Prior experience working with Hadoop technologies such as HBase, Hive, Oozie, Pig, and Spark is a plus
- Performance tuning of Java applications and messaging technologies is a plus
- 1+ year of scripting in Python, Perl, Ruby, or shell scripting
- 3+ years of production support and operationalization of applications
- Prior experience with Docker, Kubernetes or other container tools is a plus
- Prior experience with Change Data Capture (CDC) technologies and relational databases such as MS SQL, Oracle and DB2 is a plus
- Knowledge and understanding of SDLC and Agile/Scrum procedures, processes and practices is required
- Experience mentoring software developers
- Bachelor's or equivalent degree is required

Benefits:
- Premier Medical, Dental and Vision Insurance with no waiting period
- Paid Vacation, Sick and Parental Leave
- 401(k) Plan with Profit Sharing
- Tuition Reimbursement
- Paid Training and Licensures