Originally I developed this running on RHEL 5, which was reasonably challenging since it involved a bunch of manual tweaking and hacking on the Xen infrastructure scripts directly (even finding out what to do proved difficult). I have recently updated the environment to RHEL 6, which proved to be much easier to configure.
This picture shows what the lab configuration looks like:
There are three isolated networks: cluster, iscsi1 and iscsi2. Another network, appnet, is NAT'ed to the host's network interface to provide external access if required. Each machine has four network interfaces. All three VM nodes (node1 - node3) are built by first creating a template node (call it node0) using kickstart. The subsequent nodes are created by cloning node0. To simplify the cloning process the IP addresses for all the clone nodes are assigned using DHCP - this means there are no node-specific actions to be done.
To create the setup I first set up all the networks using the Virtual Machine Manager. This is a bit tedious but not difficult. In RHEL 6 you can also use the VMM to assign addresses to the virbr NICs that are created when you create an isolated network, which gives the host access to the isolated networks. In RHEL 5 setting up the network interfaces was a bit trickier. First I had to make sure I had enough "dummy" network interfaces by adding:
options dummy numdummies=5
to /etc/modprobe.conf. Then I created a script in /etc/xen/scripts called multi-network-bridge which set up the multiple bridges and the associated dummy interfaces:
#!/bin/sh
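# Attach eth0 to the usual Xen bridge (xenbr0), then create one bridge per
# dummy interface and give the host an address on each isolated network.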
/etc/xen/scripts/network-bridge $@ vifnum=0 netdev=eth0 bridge=xenbr0
/etc/xen/scripts/network-bridge $@ vifnum=1 netdev=dummy0 bridge=virbr1
ifconfig dummy0 172.16.50.254 netmask 255.255.255.0 broadcast 172.16.50.255 up
/etc/xen/scripts/network-bridge $@ vifnum=2 netdev=dummy1 bridge=virbr2
ifconfig dummy1 172.17.1.254 netmask 255.255.255.0 broadcast 172.17.1.255 up
/etc/xen/scripts/network-bridge $@ vifnum=3 netdev=dummy2 bridge=virbr3
ifconfig dummy2 172.17.101.254 netmask 255.255.255.0 broadcast 172.17.101.255 up
/etc/xen/scripts/network-bridge $@ vifnum=4 netdev=dummy3 bridge=virbr4
ifconfig dummy3 172.17.201.254 netmask 255.255.255.0 broadcast 172.17.201.255 up
Finally, I modified /etc/xen/xend-config.sxp and replaced the argument of the network-script entry with my own multi-network-bridge script so that xend would run my script on start-up.
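In a stock xend-config.sxp that entry reads (network-script network-bridge); after the change it ends up looking something like this (the path is relative to /etc/xen/scripts):
(network-script multi-network-bridge)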
For the isolated networks, iptables was blocking network access to the host. libvirtd automatically configures iptables to provide the NAT facility for guests. I added a rule to allow the 172.17.0.0/16 networks access on any port so that the guests can communicate with the host on the other ethernet interfaces. Unfortunately you cannot just reload the iptables rules because this flushes the rules added by libvirtd, which breaks the virtual networking.
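I haven't reproduced the exact rule I used, but as a sketch, inserting a live rule ahead of the libvirt-managed ones would look something like this:
iptables -I INPUT -s 172.17.0.0/16 -j ACCEPT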
With the networks set up it was then time to kickstart node0, the template to be used to stamp out the other nodes. On my host I set up a directory with the contents of the RHEL distro under /var/www/html/rhel-5.6 and pointed the yum host config at it. I installed httpd to support a network kickstart, dropped a basic kickstart file into /var/www/html/ks and then used:
virt-install --name=node0 --ram=512 --vcpus=1 --disk path=/var/lib/images/node0.img,size=8 --extra-args="ksdevice=eth0 ks=http://172.17.1.254/ks/node0.ks" --mac 00:16:3e:00:00:00 --network network:cluster --network network:appnet --network network:iscsi1 --network network:iscsi2 --location=/var/www/html/rhel-5.6/
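The kickstart file itself isn't reproduced in this post; a minimal node0.ks along these lines would do the job, though the partitioning, package selection and timezone below are assumptions rather than the file I actually used:
install
key --skip
url --url http://172.17.1.254/rhel-5.6
lang en_US.UTF-8
keyboard us
network --device eth0 --bootproto dhcp
rootpw redhat
firewall --disabled
authconfig --enableshadow --enablemd5
timezone UTC
bootloader --location=mbr
clearpart --all --initlabel
autopart
reboot
%packages
@core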
In the virt-install command, 172.17.1.254 is the IP address of one of the dummy/virbr-nic interfaces on the host machine. The MAC address comes from the manually allocated Xen ethernet range. To keep things simple I allocated the MAC addresses with the second-to-last octet being the node number and the last octet being the interface number of the VM, which made scripting the cloning simpler. With this scheme I configured static DHCP allocations for all the nodes. This can be done using "virsh net-edit net_name", e.g.:
virsh net-edit cluster
to edit the cluster net and add the static definitions in the dhcp section of the XML, like this:
<network>
  <name>cluster</name>
  <uuid>f0ddb5c3-7db7-9943-2195-aa0454971d0d</uuid>
  <bridge name='virbr2' stp='on' delay='0' />
  <ip address='172.17.1.253' netmask='255.255.255.0'>
    <dhcp>
      <range start='172.17.1.128' end='172.17.1.250' />
      <host mac='00:16:3e:00:01:00' ip='172.17.1.1' name='node1' />
      <host mac='00:16:3e:00:02:00' ip='172.17.1.2' name='node2' />
      <host mac='00:16:3e:00:03:00' ip='172.17.1.3' name='node3' />
    </dhcp>
  </ip>
</network>
The host entries above are the static address allocations. Once this file is edited I found I had to manually copy the XML file from /etc/libvirt/qemu/networks into /var/lib/libvirt/network/. It seems that the VMM manages the files in both places but virsh doesn't, and without the copy the changes will not take effect. In RHEL 6 I found I could restart libvirtd (service libvirtd restart), send a SIGHUP to dnsmasq, then restart libvirtd once again. The first restart writes out the dnsmasq control files, the SIGHUP kicks dnsmasq, and the second restart is needed because libvirtd seems to lose contact with dnsmasq following the SIGHUP, which breaks DHCP. I don't know if this process works on RHEL 5; back then I just rebooted after making changes.
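As a sketch, the RHEL 6 sequence amounts to something like this (pkill -HUP signals every dnsmasq instance on the host, which is fine on a box where libvirt is the only thing running dnsmasq; these aren't the literal commands I ran):
service libvirtd restart   # writes out the dnsmasq control files
pkill -HUP dnsmasq         # kick dnsmasq
service libvirtd restart   # libvirtd loses contact with dnsmasq after the HUP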
Once the template node was built I cloned it using:
virt-clone -o node0 --force --mac 00:16:3e:00:01:00 --mac 00:16:3e:00:01:03 --mac 00:16:3e:00:01:01 --mac 00:16:3e:00:01:02 --name node1 --file /var/lib/images/node1.img
Doing a clone is slightly faster than building the node with kickstart. Once the node is cloned I set the VM to autostart on boot and start it up:
virsh autostart node1
virsh start node1
That's it: a node ready to experiment with. The labs I was doing called for up to three nodes running, which was just a matter of creating more clones. I wrote a shell script that manages the building and cloning of the nodes; the command line arguments are similar to those of the Red Hat script used in the actual labs, though I have added extensions to build more/different nodes so I can use the lab setup for more than one course.
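The script itself isn't included here, but the cloning loop at its core would look roughly like the following sketch (argument handling and error checking omitted, and the structure is an assumption rather than the actual script):
#!/bin/sh
# Clone node1..node3 from the node0 template, following the MAC scheme
# where the second-to-last octet is the node number and the last octet
# is the interface number.
for n in 1 2 3; do
    virt-clone -o node0 --force --name node$n \
        --mac 00:16:3e:00:0$n:00 --mac 00:16:3e:00:0$n:03 \
        --mac 00:16:3e:00:0$n:01 --mac 00:16:3e:00:0$n:02 \
        --file /var/lib/images/node$n.img
    virsh autostart node$n
    virsh start node$n
done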
I found when building the RHEL 6.2 clones that the ethernet interfaces were numbered eth4 to eth7 instead of eth0 to eth3. This was due to a set of udev rules that pin each interface name to a particular MAC address. This makes sense on a real machine because it keeps the interface numbering consistent even when an interface goes missing, but in my case it was not desirable, so I added a post action to the RHEL 6.2 kickstart to remove the udev rules. I have also noticed that since upgrading the host to RHEL 6.2 my cloned machines no longer boot properly when the template machine is not built with a graphical console. This used to work fine when the host OS was RHEL 5.6. Debugging this is a bit awkward because there is no console access and the networking doesn't come up. I could probably loop-mount the disk image and look for logs, but for the moment I can put up with building the template node with a graphical install.
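The post action boils down to deleting the persistent-net rules file, which on RHEL 6 lives at /etc/udev/rules.d/70-persistent-net.rules (the exact %post I used may have done a little more than this):
%post
# drop the MAC-to-name pinning so the cloned NICs come up as eth0-eth3
rm -f /etc/udev/rules.d/70-persistent-net.rules
%end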
Footnotes:
- For those 1337 h4x0rz out there interested in cracking the root password in the linked kickstart file, here's a hint... try redhat to save yourself a bit of time.
- The course I attended wasn't actually on RHEL 6; it was run on RHEL 5.8. That didn't really present an issue, as I just added the distro files and set up a kickstarted template for that OS version. I was able to complete all the labs with this setup. The practice really helped in the exam: I finished in half the allotted time with a score of 100%.