Mike notes:

Points about the architecture.

  • Links between virtual router nodes in VINI are intended to mirror the physical links they run over; i.e., they are not intended to be transparent, robust links between nodes.
  • Existing implementations use IP-encapsulated Ethernet as the tunneling protocol.
  • VINI has mechanisms for routing "real" traffic through the virtual network (i.e., for directing traffic into and out of the network).
  • VINI is intended to support injecting network events such as link failures; however, neither paper mentions specific tools for doing this.

PL-VINI or VINI version 1.

Runs on a largely unmodified PlanetLab kernel and nodes (possibly with changes to support the tun/tap device?).

Each sliver in a PL-VINI slice has two components at user level:

  • the data plane: a Click router instance consisting of UDP tunnels (home-brew implementation?) to other slivers, a local tap interface for injecting packets from the local node, a forwarding (routing) table, and a switch interface to UML
  • the control plane: a UML (User-Mode Linux) instance presenting multiple virtual Ethernet interfaces to the XORP routing daemon running inside it
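
The node-local plumbing these components rely on can be sketched with a few hypothetical commands; the tap device name and Click configuration file are placeholders, since PL-VINI sets these up through its own deployment scripts:

```shell
# Tap device that the Click data plane opens to inject and extract
# packets on the local node (requires root; "tap0" is a placeholder).
ip tuntap add dev tap0 mode tap
ip link set tap0 up

# Start Click with a configuration wiring together the tap interface,
# the UDP tunnel sockets to other slivers, the forwarding table, and
# the switch element that the UML control plane attaches to.
click pl-vini.click &
```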

In theory, PL-VINI can forward arbitrary packets (not just IP), depending on the implementation of the forwarding table in Click and the routing protocol in XORP. In practice, it has been used as an "Internet in a Slice": an IPv4 network whose nodes mirror Internet2 backbone routers.

Packets enter and leave PL-VINI via OpenVPN and NAT.

Clients that "opt in" to PL-VINI use OpenVPN. A client running on an arbitrary Internet machine opens a VPN tunnel to a PL-VINI ingress/egress point (a node running an OpenVPN server in addition to UML and Click). The server feeds packets into PL-VINI via the local tap interface.
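
A client-side opt-in might look like the following OpenVPN invocation (server name, port, and credential paths are placeholders):

```shell
# Open a layer-2 tunnel to a PL-VINI ingress/egress node; the server
# feeds packets received here into the virtual network via its tap.
openvpn --client --dev tap --proto udp \
        --remote vini-ingress.example.net 1194 \
        --ca ca.crt --cert client.crt --key client.key
```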

For talking to hosts that have not "opted in", PL-VINI runs NAT within the Click instance at certain egress points. These points serve as proxies that not only allow traffic to reach outside servers but also allow the return traffic to get back in.
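
PL-VINI performs this NAT inside the Click instance itself; for intuition only, the equivalent function done with ordinary kernel NAT at an egress node would be a standard masquerade rule (interface name hypothetical):

```shell
# Rewrite source addresses of flows leaving the virtual network so
# that replies from outside servers come back to this egress node...
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
# ...and let established return traffic be forwarded back in.
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
```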

Trellis or VINI version 2.

Replaces the user-level components of PL-VINI with Linux kernel features. Click's role as the per-instance forwarding-table provider is taken over by network namespaces (NetNS) in the Linux kernel.

It uses Ethernet-over-GRE (EGRE) for inter-node tunnels, which are terminated in the kernel. These endpoints are connected through tc traffic shaping to a bridge device (either the standard Linux bridge or "shortbridge", a custom two-port bridge) and then to virtual Ethernet devices that appear inside the vservers.
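
This datapath could be approximated on one node with modern iproute2/brctl commands (addresses, names, and rates are placeholders; Trellis itself used custom kernel patches, including the "shortbridge"):

```shell
# Kernel-terminated ethernet-over-GRE tunnel to a peer node.
ip link add egre0 type gretap local 10.0.0.1 remote 10.0.0.2 key 42
ip link set egre0 up

# Shape the tunnel to mirror the bandwidth of the physical link it runs over.
tc qdisc add dev egre0 root tbf rate 100mbit burst 32k latency 50ms

# veth pair: one end bridged to the tunnel, the other handed to the
# container as its virtual ethernet device.
ip link add veth0 type veth peer name veth1
brctl addbr br0
brctl addif br0 egre0
brctl addif br0 veth0
ip link set veth1 netns guest0    # "guest0": the container's namespace
```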

This optimized implementation supports only IP forwarding but, the authors claim, can fall back on a PL-VINI-style implementation for non-IP routing.

Trellis can forward minimum-sized packets at about 67% of the raw Linux gigabit Ethernet rate.

-- Main.MikeHibler - 14 Nov 2007