Vacancy expired!
Hello, my name is Pavan, and I am a Talent Acquisition Specialist at Park Computer Systems, Inc., a technology consulting and staffing company. After reviewing your resume, I believe you may be a good fit for a job opening with one of our clients. The highlights of the position are:

Job Title: Senior Data Engineer
Location: San Diego, CA 92127 (Hybrid – on-site once a month)
Duration: 12-Month Contract

Job Description:
We are looking for a mid-level or senior Data Engineer to join the Enterprise Applications organization, which manages numerous enterprise-wide solutions such as Office365, ServiceNow, Okta, JIRA, Slack, Box, Google, Zoom, and our intranet. The candidate will join the Data Engineering team within that organization and work on implementing and maintaining leading-edge analytics systems such as Tableau, Power BI, Snowflake, and Workato. He/she will be expected to work closely with the rest of the team to identify trends and opportunities for growth through analysis of complex data sets, and to generate forward-looking insights for the business.

Responsibilities:
- Participate in designing, building, testing, and maintaining scalable data analytics systems (pipelines, warehouses, APIs).
- Build and support data pipelines to extract and transform data for enterprise analytics.
- Support pre-built data structures and data storage systems for optimal querying (Snowflake, AWS Aurora, MySQL).
- Assist data analysts in creating actionable insights into customer acquisition, operational efficiency, and other key business performance metrics (Tableau, Power BI, Looker).
- Implement process improvements: monitoring, automating manual processes, optimizing data delivery, recommending infrastructure setup for greater scalability.
- Provide manual/automated monitoring and production support for the entire analytics stack.
- Act as data warehouse / database administrator as needed.
Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field
- 6+ years of professional experience working in the field of data
- 4+ years of experience working extensively with databases and warehouses
- Experience with Snowflake and AWS Aurora is critical
- 3+ years of experience developing data pipelines (ETL/ELT)
- Strong SQL skills and experience with at least one programming language (Python, Java)
- Experience calling RESTful API services and working with Big Data technologies in AWS
- Data modeling skills
- Strong written and verbal communication skills
- Ability to work as an Agile team member with minimal supervision
- Proactive self-starter who is willing to get their hands dirty to complete tasks
- Data Science and AI/ML knowledge or experience helpful