Vacancy expired!
- 5+ years of experience in the big data ecosystem. Example technologies include batch and stream processing (e.g. Spark, Hive, Flink, Beam), analytical engines (e.g. Presto, Druid), search platforms (e.g. Solr/Lucene), tooling (e.g. Airflow, Jupyter, Superset, Tableau), and storage formats (e.g. Iceberg)
- Excellent verbal and written communication skills, able to collaborate cross-functionally with data science, machine learning, data platform and analytics teams
- Customer-focused mindset, with emphasis on user experience and satisfaction
- Superb problem-solving skills and the ability to thrive in a fast-paced, dynamic environment
- Hands-on in designing, building, scaling, and troubleshooting solutions to big data problems
- Must be self-driven, able to advise and support users in properly integrating with our data platform
- Programming experience in Java, Python, Scala, or similar languages
- Passionate about the latest big data technologies; an open source community presence is a big plus
- Experience with AWS, Kubernetes, Infrastructure-as-code, and data privacy & compliance is a big plus
- Collaborate with our infrastructure users and product management to ensure success in customer engagement and onboarding of new users.
- Be an advocate of our tech stack, stay on top of technology advancements, and explore innovation opportunities.
- Prototype, build, diagnose, fix, improve, and automate solutions to complex issues across the entire stack to power ETL, analytics, and privacy efforts across AI/ML.
- Advise and support other teams on proper integration of our platform, including holding regular brown-bag, office-hour, and training sessions.
- Build relationships with Data Scientists, Product Managers and Software Engineers to understand data needs.
- Establish and fulfill SLAs for supported data tools.