Vacancy expired!
- Perform project analysis and development tasks of an increasingly complex nature, which may require extensive research and analysis.
- Make design and technical decisions for the application and ensure its high performance.
- Determine methods and procedures for new tasks, establish them for the assignment, and coordinate activities with other employees while leading a small team and demonstrating strong leadership within it.
- Work in an agile development environment and ensure process/policy compliance per the organization's guidelines.
- Collaborate with leaders, business analysts, project managers, IT architects, technical leads, and other developers, along with internal and external customers, to understand requirements and develop solutions that meet business needs.
- Support code deployments and configuration changes to production and non-production systems, following established procedures.
- Be a thought leader; understand the latest trends and capabilities to implement modern, successful solutions.
- Contribute to your BU/Practice by:
  - Documenting your learnings from current work and engaging with the external tech community by writing blogs, contributing on GitHub and Stack Overflow, attending meetups/conferences, etc.
  - Keeping up to date on the latest technologies through technology trainings and certifications.
  - Actively participating in organization-level activities and events related to learning, formal training, interviewing, special projects, etc.
- Minimum of 7 years of experience in the design, development, and deployment of large-scale, distributed, cloud-deployed software services.
- Bachelor's degree in Computer Science or a related discipline.
- Must have AWS data engineering experience with Databricks and Delta Lake.
- Must have a deep understanding of Spark and AWS (specifically Redshift, S3, Glue, and Athena), with practical experience delivering multiple projects on Delta Lake.
- Must have been part of at least two end-to-end big data projects and have handled defined modules independently.
- Expert in SQL, with strong data modelling skills for relational, analytical, and big data workloads.
- Advanced programming skills with Python, Scala or Java.
- Strong knowledge of data structures, algorithms, & distributed systems.
- Strong experience and deep understanding of Spark internals.
- Expert in Hive.
- Hands-on experience with one of the cloud platforms (AWS, Azure, Google Cloud Platform).
- Hands-on experience with at least one NoSQL database (HBase, Cassandra, MongoDB, etc.).
- Experience in working with both batch and streaming datasets.
- Knowledge of at least one ETL tool, such as Informatica, Apache NiFi, Airflow, or DataStage.
- Experience working with Kafka or a related message queue technology.
- Hands-on experience writing shell scripts to automate processes.
- Willingness to learn and adapt.
- Delivery-focused, with a willingness to work in a fast-paced environment.
- Takes initiative and responsibility for delivering complex software.
- Knowledge of building REST API endpoints for data consumption.
- Excellent oral and written communication skills are a must.
- Well versed in Agile methodologies, with experience working on Scrum teams.
- Master's degree in Computer Science or a related discipline.
- Experience building self-service tools for analytics would be a plus.
- Knowledge of the ELK stack would be a plus.
- Knowledge of implementing CI/CD on the pipelines is a plus.
- Knowledge of containerization (Docker/Kubernetes) would be a plus.
- Knowledge of building RESTful services would be an added advantage.
- Experience working with one of the popular Public Cloud based platforms.