Job Description:
Gaditek is looking for a Senior Data Engineer for one of its business units, Savyour, to join the growing product team. The person will be responsible for expanding and optimizing our data and data pipeline architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Data Engineer will support our different teams on data initiatives and will ensure optimal data delivery throughout ongoing projects.
Once you are here, you will:
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional / non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
- Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources.
- Build analytics tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency and other key business performance metrics.
- Work with data and analytics experts to strive for greater functionality in our data systems.
What does GADITEK offer you?
At Gaditek, our talent is not just exceptional – it’s world-class! Our unstoppable team of 800+ employees is the best and the brightest, driving innovation across 7 global SaaS brands in 7 of the hottest industries: Cyber Security, Digital Media, Managed Cloud, Fintech, E-Commerce, Web 3.0, and Venture Building as a Service.
Our team is made up of dreamers, doers, and all-around rockstars who are committed to making a difference. As a result, we’re among the best companies to work for, with a plethora of benefits and an amazing culture.
Relevant Experience:
- Graduate degree in Computer Science.
- 3+ years of experience in marketing technology, data, and automation systems.
- Advanced SQL knowledge and experience with relational databases, including query authoring and working familiarity with a variety of database systems.
- Experience building and optimizing data pipelines, architectures and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with both structured and unstructured datasets.
- Experience building processes supporting data transformation, data structures, metadata, dependency management, and workload management.
- A successful history of manipulating, processing and extracting value from large disconnected datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Hands-on experience with Google Cloud, dbt, and data pipeline and workflow management tools.
- Experience with Google Cloud services: BigQuery, Cloud Storage, Cloud Run, Cloud Composer, and Dataflow.
- Experience with the Google Cloud SDK and batch APIs.
- Experience with object-oriented/functional scripting languages: Python, Java.
APPLY HERE