Vacancy expired!
Roles and responsibilities
- Design, develop, test, deliver, and operate software solutions for big data and AI problems
- Apply test-driven and Agile software development methodologies
- Develop solutions that can scale in system and data size
- Ensure high reliability for production systems
- Innovate constantly to improve quality, efficiency, and reliability
- Collaborate with customers to define and meet requirements
Qualifications/Requirements
- BS or advanced degree in Computer Science or a related field, or equivalent experience
- 4+ years' experience working on Hadoop, with in-depth experience in Oozie, Spark, and Hive; experience with Hadoop version 3 or HBase is a plus
- Proficiency in at least two of the following languages: Java, Scala, Python, R
- Experience with SQL for OLAP/data warehouse applications
- Experience building and supporting distributed system applications at scale (10s to 100s of servers)
- Fluency in Unix command-line tools and bash is preferred
- Experience with REST services is a plus
- Experience with streaming and messaging tools (e.g., Kafka, JMS, Avro, Protobuf) is a plus