An overview of the on-premises network powering waipu.tv

by Tobias Krischer

Our IPTV platform is powered by multiple components. While we rely heavily on the public cloud for most backend services, we also operate an on-premises network that spans Germany.

Construction works for a fiber connection to our data center

The very core of our product is bringing video content to our users as quickly as possible. As these users live in various regions of Germany (and may also travel), we try to locate our infrastructure close to them in networking terms. We achieve this by operating several so-called PoPs (points of presence) in different data centers.

Users starting a stream connect to a playout server in the closest PoP (closest in networking terms, not necessarily the shortest physical distance). We will describe all the services and decisions involved in future blog posts; in this post, we focus on the network.
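To make the idea of “closest” concrete, here is a minimal sketch of picking a PoP by measured round-trip time rather than by geography. The PoP names, latency values and selection logic are purely illustrative and not our actual steering mechanism.

```python
# Illustrative only: choose the "closest" PoP by measured round-trip time.
# PoP names and latencies are made up for this example.
POP_LATENCIES_MS = {
    "pop-frankfurt": 18.2,
    "pop-berlin": 24.7,
    "pop-munich": 31.5,
}


def closest_pop(latencies_ms: dict[str, float]) -> str:
    """Return the PoP with the lowest measured latency to the user."""
    return min(latencies_ms, key=latencies_ms.get)


print(closest_pop(POP_LATENCIES_MS))  # -> pop-frankfurt
```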

The playout server streaming the video content needs to receive the video manifests and chunks from somewhere. We distribute this content between our PoPs over our redundant backbone network. Every PoP consists of a pair of core routers that connect to the other PoPs.
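As a rough sketch (not our actual inventory), the topology described above could be modelled like this; the PoP and router names are made up, and a full mesh between PoPs is assumed purely for illustration.

```python
from dataclasses import dataclass
from itertools import combinations


@dataclass
class CoreRouter:
    hostname: str


@dataclass
class PoP:
    name: str
    core_routers: tuple[CoreRouter, CoreRouter]  # redundant pair per PoP


pops = [
    PoP("fra1", (CoreRouter("fra1-cr1"), CoreRouter("fra1-cr2"))),
    PoP("ber1", (CoreRouter("ber1-cr1"), CoreRouter("ber1-cr2"))),
    PoP("muc1", (CoreRouter("muc1-cr1"), CoreRouter("muc1-cr2"))),
]

# Backbone links between every pair of PoPs (full mesh assumed here).
backbone_links = [(a.name, b.name) for a, b in combinations(pops, 2)]
print(backbone_links)  # [('fra1', 'ber1'), ('fra1', 'muc1'), ('ber1', 'muc1')]
```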

The content we stream to our users comes from multiple sources to ensure reliability. In most cases, we receive multicast streams with the video signal. We transport these streams to our encoding servers. After encoding, we send the streams to the playout servers distributed across all PoPs. As the network architecture differs significantly between these two tasks, we have split them into the ingest network and the playout network.
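For readers unfamiliar with multicast ingest, the snippet below shows the basic mechanics of joining an IPv4 multicast group and reading raw UDP datagrams, roughly what an ingest host does when it receives a video signal. The group address and port are placeholders, and our actual ingest pipeline is considerably more involved.

```python
import socket
import struct

MCAST_GROUP = "239.1.2.3"  # placeholder multicast group
MCAST_PORT = 5004          # placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# Ask the kernel to join the multicast group on the default interface.
membership = struct.pack(
    "4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0")
)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

# Read a handful of datagrams (in practice these would be MPEG-TS packets).
for _ in range(5):
    datagram, sender = sock.recvfrom(2048)
    print(f"received {len(datagram)} bytes from {sender}")
```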

Equipment and fiber connections in one of our racks.

Let’s return to the playout servers – how does the stream actually get to the user? Video streaming is bandwidth-heavy, even though we do our best to tune codecs and compression settings.
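A quick back-of-the-envelope calculation shows why; the bitrate and viewer numbers below are assumptions picked for the example, not actual waipu.tv figures.

```python
# Illustrative numbers only, not real platform figures.
bitrate_mbit_s = 6             # assumed bitrate of a single HD stream
concurrent_viewers = 100_000   # assumed concurrent viewers

aggregate_gbit_s = bitrate_mbit_s * concurrent_viewers / 1_000
print(f"{aggregate_gbit_s:.0f} Gbit/s of egress traffic")  # 600 Gbit/s
```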

We therefore operate interconnections with major German ISPs (Internet Service Providers) and are open to peering with smaller ones. This allows us to send streams to users with as few hops as possible. We steer the traffic jointly with our peering partners (i.e. our users’ ISPs) to optimize the video streaming experience for our shared customers.
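Conceptually, steering boils down to mapping a user’s source prefix to the PoP where we interconnect with that user’s ISP. The sketch below shows the idea with a longest-prefix match; the prefixes, PoP names and lookup logic are hypothetical and not our production mechanism.

```python
import ipaddress

# Hypothetical mapping of peering-partner prefixes to interconnection PoPs.
PREFIX_TO_POP = {
    ipaddress.ip_network("198.51.100.0/24"): "fra1",
    ipaddress.ip_network("203.0.113.0/24"): "ber1",
}


def pop_for_user(user_ip: str):
    """Return the preferred PoP for a user via longest-prefix match."""
    addr = ipaddress.ip_address(user_ip)
    matches = [net for net in PREFIX_TO_POP if addr in net]
    if not matches:
        return None
    return PREFIX_TO_POP[max(matches, key=lambda net: net.prefixlen)]


print(pop_for_user("198.51.100.42"))  # -> fra1
```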

All of the components mentioned above need to be operated. We use our own automation system that configures our network equipment based on a custom intent-driven model of our network. The data this model is built from is our “source of truth”.
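To give a feeling for what “intent-driven” means here, a minimal and purely illustrative model could look like the following; the device names, roles and fields are made up.

```python
# Illustrative intent: we describe *what* each device should look like,
# and the automation derives the actual configuration from it.
intent = {
    "fra1-cr1": {
        "role": "core-router",
        "features": ["bgp", "isis", "multicast"],
        "loopback": "192.0.2.1/32",
    },
    "fra1-er1": {
        "role": "edge-router",
        "features": ["bgp"],
        "loopback": "192.0.2.2/32",
    },
}

core_routers = [name for name, data in intent.items() if data["role"] == "core-router"]
print(core_routers)  # ['fra1-cr1']
```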

The source of truth stores all the information needed to build the model. We use NetBox for this purpose. Our continuously running Model-Server detects changes in the source of truth and recreates its internal model.
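NetBox has an official Python client, pynetbox; the snippet below is a rough sketch of how device data could be pulled from the source of truth to build such a model. The URL, token and role slug are placeholders, and our actual Model-Server is more involved than this.

```python
import pynetbox

# Placeholders: point the client at your NetBox instance.
nb = pynetbox.api("https://netbox.example.com", token="REDACTED")

# Fetch all devices with a given role from the source of truth.
model = {}
for device in nb.dcim.devices.filter(role="core-router"):
    model[device.name] = {
        "site": device.site.slug,
        "primary_ip": str(device.primary_ip4) if device.primary_ip4 else None,
    }

print(model)
```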

We have defined roles for our equipment that describe which features should be configured on each device. Specific information like IP addresses is fetched from the source of truth and used for building the model. When a new model version is built, the Model-Server triggers the configuration rollout if necessary. We use GitLab CI to roll out the configuration to our network equipment.
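As an illustration of that last step, the sketch below renders a tiny configuration snippet from per-device data using Jinja2 templates, a common approach for this kind of automation (though not necessarily our exact tooling). The template, hostname and address are made up; in our setup, the rendered configuration would then be rolled out by the GitLab CI pipeline.

```python
from jinja2 import Template

# Illustrative template and data; real device configurations are far larger.
TEMPLATE = Template(
    "hostname {{ hostname }}\n"
    "interface Loopback0\n"
    " ip address {{ loopback }}\n"
)

device_model = {"hostname": "fra1-cr1", "loopback": "192.0.2.1 255.255.255.255"}
print(TEMPLATE.render(**device_model))
```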

This workflow allows us to iterate quickly while ensuring correctness and reproducibility.