Vacancy expired!
- Design, develop, and implement large-scale distributed systems that process large volumes of data, focusing on scalability, latency, efficiency, and fault tolerance in every system that you build
- Take ownership of a vaguely defined major business problem or product vision, and translate it into an executable technical design and roadmap that solves the problem or realizes the vision
- Demonstrate the technical skill to go very deep or very broad in solving classes of problems or creating broadly reusable solutions
- Own key components across big data platforms, and work with engineers, product managers, and engineering leaders to identify opportunities for business impact
- Participate in setting a vision and objectives for the team in alignment with business and market needs
- Model behaviors such as making everyday decisions with data, approaching everyday problems with scientific temperament and rigor, keeping a customer-first mindset, and maintaining the highest standards of operational and engineering excellence
- Raise the bar by improving the team’s definition of best practices and architecture with deep domain knowledge
- Demonstrate the belief that a team achieves more than the sum of its parts, and rely on others' candid feedback for continuous improvement
- Strong sense of ownership, focus on quality, team orientation, design thinking, responsiveness, efficiency, and innovation
- Ability to work with distributed teams in a collaborative and productive manner
- 6 to 10 years of experience building large-scale products and distributed systems in a high-caliber environment, with frameworks and abstractions that are reliable and reusable
- Advanced knowledge of at least one programming language, such as Java and/or Scala
- Experience working with big data systems such as Hadoop, Hive, Spark, Presto, Kafka, and Flink to build highly reliable, performant, easy-to-use software systems for planet-scale data
- Proficiency in the multi-tenancy, resource isolation, abuse prevention, and self-serve debuggability aspects of high-performance, large-scale services
- Experience in handling and triaging complex production issues
- Solid understanding of building and integrating with both stream-based and RESTful APIs
- Prior exposure to Azure, Google Cloud Platform, Kafka, NoSQL-based cloud-native databases, and big data technologies
- Experience with SRE practices, including operational architectures, observability, reliability, availability, and scalability