The ProtoGENI Backbone

Note: Many details of the equipment we've installed in the Backbone are available on the GPO's wiki: Integration.

Overview

We are building a backbone, in partnership with Internet2, as part of our Spiral One effort. This backbone will be built on top of the Internet2 wave infrastructure, will use HP switches to provide VLAN connectivity across that infrastructure, and will include PCs and NetFPGA cards (hosted in the PCs) at backbone sites. The plan for Spiral 1 is to get this hardware installed at 8 colocation sites across the country, with 10Gbps of dedicated bandwidth between sites.

Resources, Slicing, and Isolation

The ProtoGENI backbone will run over 10Gbps waves on the Internet2 infrastructure. We will run Ethernet on these waves and slice it with VLANs. Researchers will not be able to program the Ethernet switches directly, but they will be able to select the topology of VLANs to run on top of the infrastructure, and we hope to enable OpenFlow on these switches, allowing experimenters direct control over their forwarding tables.

VLANs will prevent slices from seeing each other's traffic. They will not provide QoS-like guarantees for performance isolation; however, we will use three different techniques to prevent over-subscription of these resources. First, request RSpecs and tickets will include the bandwidth to be used by slices; we will track the bandwidth promised to slices and not over-book it. Second, shared PCs will use host-based traffic shaping to limit slices to the bandwidths they have been promised. Third, for components over which experimenters have full control (and thus host-based limits would not be enforceable), we will use the traffic-limiting features of our backbone switches to enforce limits on the VLANs attached to those components.
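As a rough illustration of the first technique, the Python sketch below (with invented link names and capacities, not our actual implementation) tracks the bandwidth promised on each backbone link and denies any request that would over-book a link:

    # Minimal sketch of bandwidth bookkeeping for admission control: record the
    # bandwidth already promised to slices on each backbone link and refuse any
    # request that would over-book a link. Links and capacities are illustrative.

    LINK_CAPACITY_MBPS = {
        ("salt-lake", "kansas-city"): 10_000,   # one 10Gbps wave
        ("kansas-city", "chicago"): 10_000,
    }

    promised = {link: 0 for link in LINK_CAPACITY_MBPS}

    def admit(slice_name, links, bandwidth_mbps):
        """Grant the request only if every requested link has enough headroom."""
        if any(promised[l] + bandwidth_mbps > LINK_CAPACITY_MBPS[l] for l in links):
            return False  # would over-book some link; deny the ticket
        for l in links:
            promised[l] += bandwidth_mbps
        return True

    # Example: a slice asking for 2Gbps along a two-hop backbone path.
    print(admit("slice-a", [("salt-lake", "kansas-city"),
                            ("kansas-city", "chicago")], 2_000))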

The PC components will be handled in two different ways. Some will be sliced using in-kernel virtualization techniques adopted from PlanetLab and VINI. This allows for a large number of slivers, but provides only limited control over the network stack. In the case of PlanetLab vservers, slivers are unable to see each other's traffic, but they share interfaces and have no control over routing tables, etc. VINI adds a significant amount of network virtualization, allowing slivers to have their own virtual interfaces, which greatly aids slicing via tunnels and VLANs; it also gives slivers control over their own IP routing tables. These technologies provide little in the way of performance isolation between slivers, so our main strategy will be admission control to prevent these components from being overloaded.

Because the slicing techniques listed above share a common kernel among slivers, they allow for a large number of slivers but do not permit disruptive changes to the kernel, such as modifying the network stack. For this reason, a set of our components will be run on an exclusive-access basis, on which experimenters will be able to replace the operating system, etc. In the future, if a good slicing implementation appears for Xen, VMware, or some other more traditional virtual machine monitor, we may consider using it on this set of components.

We do not expect NetFPGAs to be sliceable in the near future, so we intend to allocate them to one slice at a time and to deploy several of them (3 to 4 per backbone site) to support simultaneous slices.

Physical Connections

Backbone switches will be connected to the Internet2 wave (DWDM) network (used for many purposes other than GENI) via 10Gbps Ethernet interfaces. The wave network provides a "virtual fiber", so that the switches appear to be directly attached to each other (over a distance of hundreds of miles). Each switch will have two or three 10Gbps interfaces, depending on the out-degree of the Internet2 site. Each switch will have a single 1Gbps copper Ethernet interface to the Internet2 IP network (non-GENI) for connectivity to the outside world.

Each PC and NetFPGA will have a number of experimental interfaces: at least one 1Gbps interface per backbone link at the site (i.e., its out-degree), and possibly more (up to four). PCs will have an additional interface for the control plane (e.g., remote power cycling and console access) and for connectivity to the outside world.

We are investigating possible interconnections with regional networks, which may be done with wave (DWDM) equipment, or Ethernet.

There is more detail on our Backbone Node page.

Measurement Capabilities

The Ethernet switches have basic packet counters and can be configured to "mirror" traffic between ports for collection purposes. The PCs will be able to use standard measurement and capture tools, such as tcpdump. Because they are programmable hardware, the NetFPGAs support a great deal of custom measurement.
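For example, a capture on one of the PCs might look like the Python sketch below, which just wraps a standard tcpdump invocation (the interface name, packet count, output file, and filter are placeholders):

    # Sketch: capture a bounded number of packets on an experimental interface
    # of a backbone PC with the standard tcpdump tool. Interface name, count,
    # output file, and capture filter are placeholders for illustration.
    import subprocess

    def capture(interface="eth1", count=1000, outfile="slice-traffic.pcap",
                bpf_filter="vlan"):
        # -i: interface to listen on, -c: stop after N packets,
        # -w: write the raw packets to a file for later analysis
        cmd = ["tcpdump", "-i", interface, "-c", str(count), "-w", outfile, bpf_filter]
        subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        capture()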

Services and Capabilities

We will leverage the Emulab tools for creation and manipulation of VLANs on the backbone, and some Emulab tools for sliver programming and manipulation will be available as well. Our slice embedding service will aid in the selection of backbone paths for researchers who are interested in using the backbone simply for connectivity, rather than in controlling specific backbone components and routers. We hope to make OpenFlow available on our backbone switches, giving researchers a large amount of control over their forwarding behavior.
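To give a flavor of what path selection involves, the sketch below (a hypothetical topology and a plain breadth-first search, not our actual embedding algorithm) finds a backbone path on which every link still has the bandwidth a slice is requesting:

    # Sketch of the kind of path selection a slice embedding service might do:
    # find a path between two backbone sites on which every link still has at
    # least the requested bandwidth available. The topology and numbers are
    # invented for illustration.
    from collections import deque

    # spare bandwidth (Mbps) remaining on each backbone link
    available = {
        ("salt-lake", "kansas-city"): 8_000,
        ("kansas-city", "chicago"): 6_000,
        ("chicago", "new-york"): 9_000,
        ("salt-lake", "seattle"): 10_000,
    }

    def neighbors(site):
        for (a, b), bw in available.items():
            if a == site:
                yield b, bw
            elif b == site:
                yield a, bw

    def find_path(src, dst, bandwidth_mbps):
        """Breadth-first search restricted to links with enough spare capacity."""
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt, bw in neighbors(path[-1]):
                if nxt not in seen and bw >= bandwidth_mbps:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(find_path("salt-lake", "new-york", 5_000))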

IP Addresses

Since our clearinghouse will be running here at Emulab, we will use our own IP address space for the public interface of the clearinghouse itself, as well as for our aggregate/component manager. AMs running at remote sites (i.e., other Emulabs that join our clearinghouse) will use their own IP space for the AM's interface. So, we are planning on using our own existing public IPs for the control plane.

In our case, the CMs for the switches in Internet2 will be run at Utah, so while the switches do need IP addresses for the management plane between them and Utah, those addresses do not need to be public. We are planning to use private IP space for this management plane.

The PC components in the Internet2 colo centers will have connectivity to the Internet2 IP network.

As far as the 'experimental plane' goes, we plan to mostly continue our Emulab practice of using private IP space for user topologies. For experiments that require outside connectivity, the PC components will run some service on their public IP interfaces to talk to end users; on the back end, connectivity along the backbone can use private IP space. (And, of course, some experiments may not run IP at all on the backbone...) Sites that require tunnels to each other and/or to the backbone will simply use their existing campus IPs outside the tunnel and private IP space within.
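As a small illustration of that practice (purely hypothetical numbers, using Python's standard ipaddress module), each new user topology could simply be handed the next unused /24 of RFC 1918 space:

    # Illustrative sketch, not ProtoGENI code: hand each experiment topology a
    # distinct RFC 1918 subnet, keeping experimental traffic on the backbone in
    # private address space while public campus IPs sit outside any tunnels.
    import ipaddress

    # carve the 10.0.0.0/8 private block into /24s, one per user topology
    _pool = ipaddress.ip_network("10.0.0.0/8").subnets(new_prefix=24)

    def next_experiment_subnet():
        """Return the next unused /24 of private space for a user topology."""
        return next(_pool)

    print(next_experiment_subnet())  # 10.0.0.0/24
    print(next_experiment_subnet())  # 10.0.1.0/24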