Team for Career Site

Technology

In short

As a data engineer you’ll design data models, build data pipelines, automate builds and deployments using CI/CD, decipher APIs, and leverage SQL to provide timely, clean, tested, documented and well-defined data.

You also act as an enabler for other data domain teams by providing guidance on architectural and design decisions, supporting their data modeling efforts and evangelizing and encouraging our best practices and governance policies.

Your mission

What your first 6 months at On might look like:

1 month:
– Get to know the data landscape and introduce yourself to various data teams you will be interacting with
– Pick up and resolve your first change request
– Participate in code reviews, making sure that our code is well designed, tested, robust, secure, compliant, performant and readable

3 months:
– Implement a new service or feature in our codebase
– Act as a subject matter expert in supporting other domain teams' data engineering efforts

6 months:
– Introduce an important architectural improvement to our systems
– Seek out opportunities to simplify and streamline data management systems and processes

Your story

– Working knowledge of building robust, fault-tolerant data ingestion and processing pipelines using technologies like Apache Kafka, Apache Beam, or similar
– Strong programming skills in Python, Scala, or Java, with experience in building scalable, low-latency data processing applications
– You are knowledgeable and experienced in SQL programming
– Experience with software engineering concepts/skills – git version control, testing, debugging, research, technical problem solving, continuous learning
– You have experience with data warehousing, data modeling and ELT concepts
– You have experience working with data infrastructure, storage, APIs, data pipelines, observability and workflow orchestration in distributed cloud environments
– You are proactive and communicative
– You are comfortable managing multiple concurrent tasks and have the capacity and willingness to completely own their lifecycle with minimal oversight
– You work in an organized, agile way, focusing on reproducibility and scalability in a dynamic business context
– You are enthusiastic about innovating, excited to continuously learn and comfortable with ambiguity
– You are curious and like to keep up with existing and new technology trends in the data space

Bonus:
– Experience with modern data tools (our data stack is based on GCP and we use BigQuery, Airflow, dbt, Looker & Hex)
– You have experience working with containerization and orchestration technology (Docker, Kubernetes) and infrastructure-as-code frameworks (Terraform, Helm)
– You have been actively involved in designing, building, and maintaining scalable, high-performance data pipelines to power real-time customer data analytics and insights

Tech Stack: https://stackshare.io/on/bi-team

Meet the team

The data platform team is responsible for building, maintaining and growing internal data platforms, systems and processes that empower everyone at On to impact business outcomes by making informed decisions.
