Vacancy expired!
- Bachelor’s degree in Computer Science, Mathematics, Engineering, or equivalent work experience
- Some exposure to working with datasets with very high volume of records or objects
- Intermediate level programming experience in Python and SQL
- One year working with Spark or other distributed computing frameworks (may include: Hadoop, Cloudera)
- Two years with relational databases (typical examples include: PostgreSQL, Microsoft SQL Server, MySQL, Oracle)
- Some exposure to AWS services, including S3, Lambda, and Step Functions, and one or more AWS database technologies, including Redshift, DynamoDB, or Athena
- Experience with contemporary data file formats such as Apache Parquet and Avro, preferably with compression codecs such as Snappy and bzip2.
- Experience analyzing data for data quality and supporting the use of data in an enterprise setting.
- Some exposure to machine learning tools and practices, including DataRobot, Amazon SageMaker, or others.
- Some exposure to Google Cloud Platform (GCP) services, which may include any combination of: BigQuery, Cloud Storage, Cloud Functions, Cloud Composer, Pub/Sub, and others (this may be via POC or academic study, though practical experience is preferred).
- Streaming technologies (e.g.: Amazon Kinesis, Kafka).
- Graph Database experience (e.g.: Neo4j, Neptune).
- Distributed SQL query engines (e.g.: Athena, Redshift Spectrum, Presto).
- Experience with caching and search engines (e.g.: Elasticsearch, Redis).
- ML experience, especially with Amazon SageMaker, DataRobot, or AutoML.
- IaC (infrastructure-as-code) tools, including CDK, Terraform, CloudFormation, and Cloud Build.
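As an illustration of the Python data-quality analysis skills listed above, a minimal, dependency-free sketch of rule-based record validation (the record layout, rule names, and currency list are hypothetical examples, not part of the role's actual systems):

```python
# Minimal data-quality rule engine: each rule is a predicate applied per record.
# Field names and rule thresholds here are hypothetical examples.

RULES = {
    "id_present": lambda rec: rec.get("id") is not None,
    "amount_non_negative": lambda rec: isinstance(rec.get("amount"), (int, float)) and rec["amount"] >= 0,
    "currency_known": lambda rec: rec.get("currency") in {"USD", "EUR", "GBP"},
}

def evaluate(records):
    """Return a per-rule count of failing records."""
    failures = {name: 0 for name in RULES}
    for rec in records:
        for name, rule in RULES.items():
            if not rule(rec):
                failures[name] += 1
    return failures

sample = [
    {"id": 1, "amount": 10.0, "currency": "USD"},     # passes every rule
    {"id": None, "amount": -5, "currency": "JPY"},    # fails every rule
]
print(evaluate(sample))
```

In practice the same per-rule failure counts would be computed at scale with SQL aggregates or PySpark rather than a Python loop, but the rule-as-predicate structure carries over directly.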
- Build and maintain serverless data pipelines at terabyte scale using AWS and Google Cloud Platform services: AWS Glue (PySpark and Python), Amazon Redshift, S3, Lambda, Step Functions, Athena, DynamoDB, BigQuery, Cloud Composer, Cloud Functions, Cloud Storage, and others.
- Integrate new data sources from enterprise systems and external vendors using a variety of ingestion patterns, including streaming, SQL, file-based, and API-based ingestion.
- Maintain and provide support for the existing data pipelines using the above-noted technologies.
- Develop and enhance the data architecture of the new environment, including recommending optimal schemas, storage layers, and database engines (relational, graph, columnar, and document-based) according to requirements.
- Develop real-time/near-real-time data ingestion from a range of data integration sources, including business systems, external vendors, and partner and enterprise sources.
- Provision and use machine-learning-based data wrangling tools such as Trifacta to cleanse and reshape third-party data, making it suitable for use.
- Participate in a DevOps culture by developing deployment code for applications and pipeline services.
- Develop and implement data quality rules and logic across integrated data sources.
- Serve as internal subject matter expert and coach to train team members in the use of distributed computing frameworks and big-data services and tools, including AWS and Google Cloud Platform services and projects.
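The serverless-pipeline responsibilities above can be sketched as a single Lambda-style handler. This is a self-contained illustration only: the event shape and field names are hypothetical, and the downstream S3 write (normally a boto3 `put_object` call) is deliberately omitted so the sketch runs without AWS credentials:

```python
import json

def transform(record):
    """Hypothetical cleansing step: lowercase keys and drop empty values."""
    return {k.lower(): v for k, v in record.items() if v not in (None, "")}

def handler(event, context=None):
    """Lambda-style entry point: read records from the event, transform them,
    and return newline-delimited JSON ready for an S3 put_object
    (the actual boto3 call is omitted to keep the sketch self-contained)."""
    records = [transform(r) for r in event.get("records", [])]
    body = "\n".join(json.dumps(r, sort_keys=True) for r in records)
    return {"statusCode": 200, "body": body, "count": len(records)}

event = {"records": [{"ID": 1, "Name": "a", "Note": ""}]}
print(handler(event))
```

A real pipeline step like this would typically be triggered by an S3 event or a Step Functions state, with the transformed output landing in S3 for Athena or Redshift Spectrum to query.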