Vacancy expired!
Your Opportunity
We help our clients plan for their future, and they are passionate about the tools and experiences we provide. We collaborate with user experience and design, business, and technology partners across the enterprise to build software experiences our users are passionate about.

What you are good at

Job Description:

- Gather functional and non-functional requirements, produce low-level and high-level designs, and architect solutions.
- Develop, build, deploy, and maintain code for batch and near-real-time streaming ingestion, curation, and analytics platforms that handle terabytes of data using Talend ETL pipelines, the Hadoop/Big Data ecosystem, MapR-FS/Streams, Spark Batch, Spark Streaming, Java/J2EE, Hive, HBase, Teradata, RabbitMQ, IBM MQ, Oracle, Python, and UNIX shell scripting.
- Build Big Data hub, data warehouse, and analytical architecture and framework jobs on Hadoop for advanced business analytics, reporting, and machine learning.
- Develop logical and physical data models.
- Collaborate strategically with high-level business subject matter experts, product owners, and cross-functional teams to establish the implementation scope.
- Convert business requirements and use cases into technical solutions and develop the most complex streaming ingestion jobs.
- Troubleshoot and provide emergency production fixes for complex, business-critical issues.

What you have

Requirements: Master's degree or foreign equivalent in Business Administration, Computer Science, Engineering, or a related field, and 6 years of experience in the job offered or a related occupation. Experience and/or education must include: