At the moment ZFS, DTrace and Zones are the poster children among the features of Solaris. But there will be a fourth one soon. Since build 105 it's integrated into OpenSolaris (many people will already know which feature I want to describe in this article). This feature has the project name Crossbow. It's the new TCP/IP stack of OpenSolaris and was developed with virtualisation in mind from the ground up.
Concepts of Crossbow
The Virtualisation part
This part is heavily inspired by this blog entry of Ben Rockwood, but he omitted some parts in the course of his article, so I extended it a little bit.
It's really easy to create etherstubs and virtual NICs.
First we create two virtual switches, called etherstub0 and etherstub1:
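Creating them takes one dladm call per switch:

# dladm create-etherstub etherstub0
# dladm create-etherstub etherstub1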
Okay, now we create virtual NICs that are bound to the virtual switch etherstub0. These virtual NICs are called vnic0 and vnic1; the servera zone will later need a third one on this switch, so I create vnic2 right away:
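# dladm create-vnic -l etherstub0 vnic0
# dladm create-vnic -l etherstub0 vnic1
# dladm create-vnic -l etherstub0 vnic2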
Now we do the same with our second virtual switch:
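The serverb zone will later bind to vnic4, so I create vnic3 (for the router) and vnic4 here:

# dladm create-vnic -l etherstub1 vnic3
# dladm create-vnic -l etherstub1 vnic4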
Yes, that's all … but what can we do with it? For example, simulate a complete network inside your system. Let's create a testbed with two networks, a router with firewall and NAT, and a server in each network. Obviously we will use zones for this.
A template zone
First we create a template zone. This zone is just used to speed up the creation of the other zones. To enable zone creation based on ZFS snapshots, we have to create a filesystem for our zones and mount it at a convenient place in the filesystem:
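Assuming a root pool named rpool, a single command does the job:

# zfs create -o mountpoint=/zones rpool/zones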
Now we prepare a command file for the zone creation. It's pretty much the standard for a sparse root zone. We don't configure any network interfaces, as we never boot or use this zone directly. It's just a template, as the name already states.
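A minimal sketch of such a command file (the plain create already gives you the inherited /lib, /platform, /sbin and /usr of a sparse root zone), saved as template.cfg:

create
set zonepath=/zones/template
set autoboot=false
commit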
Now we create and install the zone. We will not boot it, as we don't need it running for our testbed.
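Assuming we saved the command file as template.cfg:

# zonecfg -z template -f template.cfg
# zoneadm -z template install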
While waiting for the zone installation to finish, we can create a few other files. First you should create a file called site.xml. This file controls which services are online after the first boot. You can think of it as a sysidcfg for the Service Management Framework. The file is rather long, so I won't post it in the article directly. You can download my version of this file here.
Zone configurations for the testbed
At first we have to create the zone configurations. The files are very similar; the differences are in the zonepath and in the network configuration. The zone servera is located in /zones/servera and uses the network interface vnic2.
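Stored in a command file (say servera.cfg), the configuration could look like this:

create
set zonepath=/zones/servera
set autoboot=false
set ip-type=exclusive
add net
set physical=vnic2
end
commit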
The zone serverb uses the directory /zones/serverb and is configured to bind to the interface vnic4.
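The file serverb.cfg differs only in those two settings:

create
set zonepath=/zones/serverb
set autoboot=false
set ip-type=exclusive
add net
set physical=vnic4
end
commit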
The configuration of the router zone is a little bit longer as we need more network interfaces:
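A sketch of router.cfg — I assume vnic1 and vnic3 as the router's legs into the two networks:

create
set zonepath=/zones/router
set autoboot=false
set ip-type=exclusive
add net
set physical=vnic1
end
add net
set physical=vnic3
end
commit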
To speed up the installation we create some sysidcfg files for our zones. Without these files the installation would “go interactive” and you would have to use menus to provide the configuration information. When you place such a file at /etc/sysidcfg, the system will be initialized with the information provided in the file.
I will start with the sysidcfg file of the router zone:
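The addresses in this sketch are just examples — I use 192.168.1.0/24 for the first and 192.168.2.0/24 for the second network. Note that sysidcfg only configures the primary interface; the second one can be brought up after the first boot, for example with an /etc/hostname.vnic3 file:

system_locale=C
terminal=dtterm
network_interface=vnic1 {primary
    hostname=router
    ip_address=192.168.1.1
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=NONE}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=Europe/Berlin
root_password=<encrypted password from /etc/shadow>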
After this, we create a second sysidcfg file for our first server zone. I store the following content in a file called servera_sysidcfg:
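A sketch with my example addressing:

system_locale=C
terminal=dtterm
network_interface=vnic2 {primary
    hostname=servera
    ip_address=192.168.1.10
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=NONE}
security_policy=NONE
name_service=NONE
nfs4_domain=dynamic
timezone=Europe/Berlin
root_password=<encrypted password from /etc/shadow>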
When you look closely at the network_interface line, you will see that I didn't specify a default route. Please keep this in mind.
In a last step I create serverb_sysidcfg, the config file for our second server zone:
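It only differs in the network_interface block; the rest is identical to servera_sysidcfg:

network_interface=vnic4 {primary
    hostname=serverb
    ip_address=192.168.2.10
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=NONE}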
Firing up the zones
After creating all these configuration files, we use them to create some zones. The procedure is similar for all zones: first we do the configuration, then we clone the template zone. As the template zone is located on a ZFS filesystem, the cloning takes just a second. Before we boot the zone, we put the configuration files we prepared while waiting for the template installation into place.
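For servera the whole sequence looks like this; the other zones work the same way with their respective files:

# zonecfg -z servera -f servera.cfg
# zoneadm -z servera clone template
# cp servera_sysidcfg /zones/servera/root/etc/sysidcfg
# cp site.xml /zones/servera/root/var/svc/profile/site.xml
# zoneadm -z servera boot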
Now let's look at the routing table of one of our servers:
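# zlogin servera netstat -rn

With my example addressing, the interesting line is a default entry pointing to 192.168.1.1, the router's address in this network.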
Do you remember that I asked you to keep in mind that we didn't specify a default route in the sysidcfg? So why do we have a default router anyway? There is some automagic in the initial boot routine. When a system with a single interface comes up without a default route specified in /etc/defaultrouter and without being a DHCP client, it automatically starts the router discovery protocol specified by RFC 1256. Using this protocol, the host adds all available routers in the subnet as default routers.
Router discovery is implemented by the in.routed daemon, which speaks two different protocols: the already mentioned rdisc protocol and RIP. The RIP part is automatically activated when a system has more than one network interface.
Building a more complex network
Let's extend our example a little bit …
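The listing for this step could look like the following — the topology here is my assumption: a third etherstub, routerb connecting it to the first network, routerc connecting it to the second one, and a place for a new server zone:

# dladm create-etherstub etherstub2
# dladm create-vnic -l etherstub0 routerb0
# dladm create-vnic -l etherstub2 routerb1
# dladm create-vnic -l etherstub1 routerc0
# dladm create-vnic -l etherstub2 routerc1
# dladm create-vnic -l etherstub2 serverc0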
As you see, you are not bound to a certain numbering scheme. You can name a vnic whatever you want, as long as it begins with letters and ends with a number.
We don't have to configure any default router in this sysidcfg; the system boots up as a router and will get its routing tables via the RIP protocol:
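A sketch of routerb_sysidcfg, assuming routerb sits at 192.168.1.2; the rest of the file matches the earlier ones, and the second interface routerb1 (192.168.3.1 in the new 192.168.3.0/24 network) is brought up after the first boot:

network_interface=routerb0 {primary
    hostname=routerb
    ip_address=192.168.1.2
    netmask=255.255.255.0
    protocol_ipv6=no
    default_route=NONE}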
Okay, we can fire up the zone.
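The zone configuration routerb.cfg is analogous to the first router zone, just with routerb0 and routerb1 as its interfaces. After that it is the usual configure, clone, copy, boot:

# zonecfg -z routerb -f routerb.cfg
# zoneadm -z routerb clone template
# cp routerb_sysidcfg /zones/routerb/root/etc/sysidcfg
# cp site.xml /zones/routerb/root/var/svc/profile/site.xml
# zoneadm -z routerb boot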
Okay, the next zone is the routerc zone. We bind it to the matching vnics in the zone configuration.
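With the vnic names from above, routerc.cfg looks like this:

create
set zonepath=/zones/routerc
set autoboot=false
set ip-type=exclusive
add net
set physical=routerc0
end
add net
set physical=routerc1
end
commit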
The same rules as for routerb apply to routerc. We will rely on the routing protocols to provide a default route, so we can just insert NONE into the sysidcfg.
Okay, I assume you already know the following steps. It's just the same, only with other files.
Okay, this is the last zone configuration in my tutorial: the one for the serverc zone.
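Assuming the serverc0 vnic created above:

create
set zonepath=/zones/serverc
set autoboot=false
set ip-type=exclusive
add net
set physical=serverc0
end
commit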
Well … it's zone startup time again …
Obviously we have to log in to both router zones and activate forwarding and routing. At first on routerb:
# routeadm -e ipv4-forwarding
# routeadm -e ipv4-routing
# routeadm -u
Afterwards on routerc. The command sequence is identical.
# routeadm -e ipv4-forwarding
# routeadm -e ipv4-routing
# routeadm -u
Now let's log in to our server:
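Assuming the new serverc zone is the one we are interested in:

# zlogin serverc netstat -rn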
As you see, there are two default routers in the routing table. The host receives router advertisements from both routers, thus it adds both to the routing table.
Now let's have a closer look at the routing table of the routerb system:
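# zlogin routerb netstat -rn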
This system has more than one network interface, thus in.routed starts up as a RIP-capable routing daemon. After a short moment in.routed has learned enough about the network and adds its routes to the kernel routing table.