
This is a set of notes I wrote while looking through the PlanetLab source code when they were migrating to PLC v4 in early 2007. They were originally in bas:~johnsond/planetlab/code.notes.


PLC/Methods/ contains a variable listing the available methods; each method is implemented by a class of the same name, found in the src files under PLC/Methods/. PLC/ provides a simple class that instantiates the class for whatever XMLRPC method is being invoked, runs it, and hands the result back to the caller (probably mostly the webserver).
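The dispatch pattern described above can be sketched roughly like this. This is a minimal, hypothetical reconstruction -- the class, variable, and function names are mine, not the actual PLC code:

```python
# Hypothetical sketch of method dispatch: each RPC call is a class named
# after the call; the dispatcher looks the class up, instantiates it, runs it.

class GetNodes:
    """Stands in for a PLC/Methods/ method class."""
    def call(self, auth, node_filter=None):
        # a real implementation would check auth and query the db
        return [{"node_id": 1, "hostname": ""}]

# analogous to the list of methods kept in PLC/Methods/
methods = {"GetNodes": GetNodes}

def dispatch(method_name, *args):
    cls = methods[method_name]      # look up the class by RPC name
    return cls().call(*args)        # instantiate it and invoke the method

print(dispatch("GetNodes", {"AuthMethod": "password"}))
```

The point of the pattern is that the XMLRPC layer never needs to know anything about individual calls; it only needs the name-to-class map.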

I've broken down the methods a bit, filtering out deprecated ones, and dividing them according to which roles can call them.

  • ~johnsond/planetlab/plc4.dep -- deprecated methods
  • ~johnsond/planetlab/plc4.nondep.roles -- all non-deprecated methods
  • ~johnsond/planetlab/plc4.nondep.adminonly -- methods that can only be invoked by admins
  • ~johnsond/planetlab/plc4.nondep.roles.notadminonly -- methods that can be invoked by more than just admins (e.g., PIs, techs, users)

Unsurprisingly, Peer (i.e., "federation") methods can only be called by admins (except for GetPeerData, which has to be callable by peers for sync). I don't think any of the others really bear comment... mostly it's enough to read the DB schema and then recognize which roles can call which methods.

As far as the methods we use, the functionality still seems to be the same, or nearly so; it's just that the method names have been changed.

The following are some quick notes I made on a few src bits:

  • planetlab4.sql: describes the database schema; also creates a bunch of views and inserts default data.
  • tools/
    • plcdb.3-4.conf: describes a mapping from the PLC 3 db format to the PLC 4 format; used to do the db upgrade
  • uses python distutils.core lib to install PLC python modules and classes
  • apache handler for PLC api
  • simple API test webserver
  • simple command-line PLC api test program
  •, test files... the latter does something with two myplc instances.
  • php/*: generate a php class wrapper for the python PLC API.
  • php/xmlrpc: proxies xmlrpc requests made to the webserver through to the python code.
  • PLC/
    • basically, the gateway into the API; contains functions that can either process an XMLRPC call or return a callable method based on its params.
    • performs several different types of authentication:
      • GPG ("Proposed PlanetLab federation authentication structure"), session-key-based, boot (for remote nodes checking in with PLC), and password
    • all api functions defined in the Methods/ subdir must extend this class (and define their param list, return val type info). This way, the XMLRPC stuff can type check params, log all method calls, etc.
    • contains methods that help sync up peers. Nothing remarkable, except that there is a translation from remote users to local users. It's not clear to me exactly how the local vs remote person_ids are handled (probably just by inserts into the db, and autoinc on the person_id field), but what does seem clear is that email address continues to be the one true unique identifier. Objects (like persons) must be unique to a single PLC.
    • contains functions related to peers:
      • intra-peer auth, deleting all refs to a peer in local PLC instance, etc
      • also maps a peer to its existence in the db; see below NOTE.
    • NOTE: essentially, there is a python class for each object in the db. Each class contains utility methods for manipulating the object (e.g., for, add_node and remove_node...). Each file provides two classes:
      • one extending Row, which is a class that provides generic ops on single row data,
      • and one extending Table, which provides ops on groups of Rows.
    • Each db object defines the field names (with corresponding type info) in its table.
    • an interface to a sql db; sits on top of python postgres APIs. You can grab fields, edit rows/groups of rows as dicts, then sync the changes back.
    • a little "db filter" based on field names; constructs a sql conditional (but only a few supported operators: is,in,like,=,not).
    • uses python curl interface to validate SSL certs with xmlrpc (apparently the default python xmlrpc transport doesn't)
    • implements a planetlab ipod (udp or icmp), based on hostkey.
    • Note: the rest of the files are rather uninteresting... there are a bunch of files that just wrap the Table and Row abstractions for a specific table (e.g., BootStates, AddressTypes, Addresses...)
    • Methods/
      • Note: This is pretty slick, architecturally. Each RPC op gets its own file, and generally creates an object of the db type it wants to operate on (e.g., a Peer, a Slice, a Person...); typically those objects can do the requested op in a single method call. Pretty good stuff, if nearly everything you do is a database object. For instance, to perform the AddPeer RPC operation, the class instantiates a Peer object and syncs the object back to the db.
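The Row/Table/Method layering described in these notes might look something like this stripped-down sketch. All names and signatures here are illustrative stand-ins, not the real PLC classes:

```python
# Hypothetical sketch of the Row/Table layering plus a Methods/-style RPC op.

class Row(dict):
    """Generic ops on a single db row, held as a dict of fields."""
    fields = {}

    def validate(self):
        unknown = set(self) - set(self.fields)
        assert not unknown, "unknown fields: %s" % unknown

    def sync(self, db):
        # stand-in for the INSERT/UPDATE the real class would issue
        self.validate()
        db.append(dict(self))

class Table(list):
    """Ops on groups of Rows."""
    def ids(self, key):
        return [row[key] for row in self]

class Peer(Row):
    # each db object declares its field names (with type info) in its table
    fields = {"peer_id": int, "peername": str, "peer_url": str}

class AddPeer:
    """Sketch of an RPC op: build the object, then sync it to the db."""
    def call(self, db, peer_fields):
        peer = Peer(peer_fields)
        peer.sync(db)
        return peer["peer_id"]

db = []                     # stand-in for the real postgres connection
AddPeer().call(db, {"peer_id": 7, "peername": "plc-eu", "peer_url": "https://..."})
peers = Table(db)
print(peers.ids("peer_id"))   # [7]
```

The appeal is that nearly every RPC op reduces to "build the object, call one method on it, sync" -- which matches the note above about AddPeer.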


Website

It's based on Drupal, but I'm not sure how complete it is yet. For instance, I don't see any way to update Person objects yet. Other things (like slices and sites) are more complete (in terms of being able to update properties and trigger functionality from the website). Also, the PLCAPI hasn't been checked in (however, it is generated straight out of the build in new_plc_api, so it just has to be copied to the right place on build---it should be ready to go right away).

From my two-second glance at drupal and how planetlab integrates with drupal, I guess that we could do it too... but it'd be quite a lot of work (probably a lot would be trivial, though). I would bet that the worst part is making drupal accept our authentication system (i.e., with projects and groups), although what I see implies there is a sane way to do this without too much pain.


myplc

Nothing else really interesting about myplc... except: contains the xml config file parser used in the PLC API.

Overall Notes

Lots of stuff in the code about myplc peers (presumably, mostly Thierry's contribution). I guess this will be their federation prototype... although I personally don't think it's complete, unless I have missed the support for policy expression---which I believe is utterly necessary for peering. Basically, peering seems like it's all or nothing. At least peering doesn't seem to be transitive!

Often, instead of removing objects from databases (such as users, peers, sites, etc), the object's deleted field is set. Then any indices created on the table specify the condition "where deleted is false".
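A minimal demonstration of this soft-delete-plus-partial-index pattern, using sqlite3 for convenience (the real schema is PostgreSQL, and the table here is an invented miniature, not the actual persons schema):

```python
# Soft delete: rows are never removed, a 'deleted' flag is flipped instead,
# and indices are built only over the rows "where deleted is false".
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE persons (
        person_id INTEGER PRIMARY KEY AUTOINCREMENT,
        email     TEXT NOT NULL,
        deleted   BOOLEAN NOT NULL DEFAULT 0
    );
    -- partial index: only rows that are not soft-deleted are indexed
    CREATE UNIQUE INDEX persons_email ON persons (email) WHERE deleted = 0;
""")

conn.execute("INSERT INTO persons (email) VALUES ('')")
# "deleting" the person just flips the flag instead of removing the row
conn.execute("UPDATE persons SET deleted = 1 WHERE email = ''")
# the email can be reused, since the unique index ignores deleted rows
conn.execute("INSERT INTO persons (email) VALUES ('')")

live = conn.execute("SELECT COUNT(*) FROM persons WHERE deleted = 0").fetchone()[0]
print(live)  # 1
```

The upside is that history is never lost and foreign keys never dangle; the cost is that every query (and every uniqueness constraint) has to remember to exclude deleted rows, which is exactly what the partial indices encode.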


  • they have code to sync "refresh" peers, but I don't see any code that implements peering policies! everything seems transitive if you're peered with another plc, and that seems stupid. Of course, this also reflects the current state of the GDDs, which don't really have any notion of peering ops policies (which is understandable, since the GDDs also don't seem to know how GMC will really work)
  • I wonder if all plc db state gets shared when you peer with another plc. It seems that most of the db would be required (all the account stuff, node info, slice allocations, etc). The only other way to do this would seem to be to have the NodeManager contact the appropriate PLC peer for slice setup/instantiation, instead of always contacting its main parent PLC.

vserver gunk

util-vserver/ has a whole bunch of vserver interface code. There are a few python extensions to support planetlab functionality (i.e.,,, and there is a C interface to vservers that is called by the wrapper. Most of that stuff is in the python/ subdir, although there's some scheduling/context creation stuff in the lib/planetlab.* files. There's also a virtual shell, which kicks the user into a vserver, then launches their shell from /etc/passwd (src/vsh.c).


NodeManager

The NodeManager lives on each planetlab node. It listens on a localhost tcp port and on a local unix socket for xmlrpc requests. Notice that it's no longer exposed to the world... so we'll have to adapt. From the code, it looks like the way to adapt is to ssh to the node, which will then open a kind of shell (NodeManager/forward_api_calls.c) that reads stdin and forwards to the xmlrpc server on the unix socket. What is unclear to me is how the node's local sshd knows when to invoke this shell vs the vserver shell for any particular login. Perhaps there will be some special sshd proxy method. I suppose I could find it, but it doesn't matter right now. Actually, it looks like there are "normal Unix accounts" (delegate accounts) that start the forwarding shell on login to provide access to the NM API.

The NodeManager is backed by an in-memory python database (just a dict that gets dumped to disk periodically). It stores rspecs (not GENI-style--these look like plain resource limits) and manages sliver state. There are some API calls that seem to imply that slivers will be able to grant "loans", presumably of their own resources to others, but I'm not sure how this will work (the code doesn't seem sure either).
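The "dict that gets dumped to disk periodically" style of database can be sketched like this. This is a hypothetical simplification -- the real backing format and sync policy may well differ:

```python
# A dict-backed database that survives restarts by pickling itself to disk.
import os
import pickle
import tempfile

class DictDB(dict):
    def __init__(self, path):
        super().__init__()
        self.path = path
        if os.path.exists(path):
            # reload whatever state was dumped before the last shutdown
            with open(path, "rb") as f:
                self.update(pickle.load(f))

    def sync(self):
        # the real thing would presumably do this on a timer, and atomically
        with open(self.path, "wb") as f:
            pickle.dump(dict(self), f)

path = os.path.join(tempfile.mkdtemp(), "node.db")
db = DictDB(path)
db["my_slice"] = {"rspec": {"cpu_share": 1}}   # rspec here = resource limits
db.sync()
assert "my_slice" in DictDB(path)   # state survives a "restart"
```

For the amount of state a single node carries, this is a perfectly reasonable trade against running a real database on every node.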

The core of the NodeManager ( imports a bunch of modules, each of which exposes some functionality. fires off the bwlimit services. updates the proper config file and restarts proper as necessary. is technically responsible for sliver maintenance, but it really gets proxied off through (via the register_class method) to, which has the necessary functionality for sliver setup, stop, and start. The idea is that a class that extends the Account class binds to a type of "shell", and then a worker thread processes all "account operations" according to the shell tag. For instance, Slivers are Accounts and contain the operations necessary to destroy, create, and start slivers. When an RPC comes in, the object (e.g., sliver, user account, etc.) is retrieved from the db, and the appropriate config/start/stop/etc method is called on that object. It's a little convoluted, but ok once you get the hang of it. calls in to PLC and grabs a node session key. I'm not clear yet on why this is here, but apparently the BootManager calls it.

Overall, the NodeManager isn't nearly as nice as PLC (in terms of code quality), but I guess that's to be expected in some ways.


The NodeManager API includes calls like these:

  • Create(sliver_name)
  • Destroy(sliver_name)
  • Ticket(tkt)
  • Start(sliver_name)
  • Stop(sliver_name)
  • GetEffectiveRSpec(sliver_name) (includes loans)
  • GetRSpec(sliver_name) (excludes loans)
  • GetLoans(sliver_name)
  • SetLoans(sliver_name, loans)


BootManager

This didn't appear interesting to me, because it's basically just how nodes check in securely with PLC. It probably hasn't changed very much for PLC4.

PLC DB Schema


  • table "peers"
    • peer_id, peername, peer_url (url of peer's API), cacert (public SSL cert of peer API server), key (public GPG key used for auth), deleted (?)
  • table "peer_persons", "peer_key", "peer_node", "peer_site", "peer_slice"
    • all seem to map local site ids to peer site ids, for all these objects


  • table "persons"
    • person_id (just a unique, autoinc, bigint "account identifier"), email (email addr), first_name, last_name, deleted, enabled, password (md5 hash), verification_key ("reset password key"), verification_expires, title, phone, url, bio, date_created, last_updated


  • table "sites"
    • site_id (just a unique, autoinc, bigint), login_base (site slice prefix), name, abbreviated_name, deleted, is_public, max_slices, max_slivers, latitude, longitude, url, date_created, last_updated, peer_id (this field seems to explain which peer hosts this site)
  • table "person_site"
    • person_id (foreign key in persons), site_id (from sites), is_primary (set if it is the primary account for the site)

Addresses: (a whole bunch of uninteresting support for Personal, Shipping, and Billing mailing addresses).


  • table "key_types": currently 'ssh' is the only valid type
  • table "keys" (auth keys -- must be site auth keys (?))
    • key_id, key_type, key, is_blacklisted, peer_id (from which peer)
  • table "person_key" (account auth keys -- there can be multiple keys per person)
    • person_id, key_id

Valid roles: admin, pi, user, tech, node, anonymous, peer

  • table "roles"
  • what are node, anonymous? new ones?
  • peer has this comment in the schema: "xxx not sure this us useful yet"

Node boot states:

  • table "boot_states"


  • table "nodes"
    • node_id, hostname, site_id, boot_state, deleted, model, boot_nonce, version (boot cd version), ssh_rsa_key (host key, updated by Boot Manager), key (node key), date_created, last_updated, peer_id (which peer the node is from)

Node Groups: there's a simple grouping notion for nodes. This could obviously be used to express lots of different things... but I don't see much use in the rest of the schema, except that configuration files can be specified per nodegroup.


  • table "slices":
    • slice_id, site_id, peer_id (on which peer), name, instantiation (sliver instantiation method: one of plc-instantiated, not-instantiated, delegated), url, description, max_nodes, creator_person_id, created, expires, is_deleted
  • table "slice_node" (i.e., slivers)
    • slice_id, node_id
  • table "slice_person" (i.e., slice membership)
    • slice_id, person_id
  • table "slice_attribute_types"
    • attribute_type_id, name, description, min_role_id (least powerful role that can set the attr), peer_id
  • table "slice_attribute"
    • slice_attribute_id, slice_id, node_id, attribute_type_id, value, peer_id

What appears to be standard stuff:

  • Node config files (tables "conf_files","conf_file_node", "conf_file_nodegroup")
  • Node/Network details
    • tables "network_types" (only currently valid type is 'ipv4'), "network_methods" (config methods --- currently valid are static, dhcp, proxy, tap, ipmi, unknown), "nodenetworks" (<node,iface> details like ip, mac, gateway, bwlimit...)
  • Power control units
  • Sessions (looks like stuff for php session ids---drupal needs them in a/the db)
  • Messages (presumably boilerplate email templates)
  • "Events" (these look like they focus on being an incident/bug repository -- there are things like the person responsible, node responsible, call responsible...)

-- Main.DavidJohnson - 13 Nov 2007