= Overview =

This page is a work in progress. Please be sure to read and be familiar with the [https://users.emulab.net/trac/emulab/wiki/InstallDocs Emulab documentation].

The testbed needs to know which switches are connected to it and which power ports they are plugged into. Right now, we insert these manually into the database.

Testbed nodes need to be set up to boot from the network by default. This is done through the Preboot eXecution Environment (PXE), available for most network cards. For onboard network cards, it is typically enabled through the BIOS.

 * [http://en.wikipedia.org/wiki/Preboot_Execution_Environment Preboot Execution Environment on Wikipedia]
 * [http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=19186 Intel PXE firmware and utilities]

= Adding information about your switch, power controller, and serial lines to the database =

== Setting up your switch ==

=== Creating switch node and interface types ===

In order to add our switch to the database, we need to set up node and interface types for it. We provide some SQL to make this easy. The script switch-types-create defines the following:

 * Node types for the hp5412 (which includes the hp5406), hp2810, and nortel5510 switch types.
 * Interface types for generic inter-switch trunks (used if your site has more than one switch), named "trunk_100MbE", "trunk_1GbE", and "trunk_10GbE".

For other switch types, please refer to the [https://users.emulab.net/trac/emulab/wiki/install/switches-db.html Emulab documentation].

These are now loaded by the boss-install script, but if you need to load them by hand:

{{{
mysql tbdb < ~/testbed/sql/interface-types-create.sql
mysql tbdb < ~/testbed/sql/switch-types-create.sql
}}}

=== Adding your switch to DNS ===

Your switch needs a name that resolves. You must add it either to /etc/hosts or to the name server running on boss. To add it to the name server, put an entry in /etc/namedb/<your domain>.internal.db.head, then refresh the DNS server by running /usr/testbed/sbin/named_setup.

For example, on mini-isi.deterlab.net we have a switch, which we will name hp1, at 192.168.254.1 on the HARDWARE_CONTROL network. So we add it to DNS as follows:

{{{
[jhickey@boss ~]$ sudo su -
boss# echo "hp1 IN A 192.168.254.1" >> /etc/namedb/mini-isi.deterlab.net.internal.db.head
boss# logout
[jhickey@boss ~]$ /usr/testbed/sbin/named_setup
[jhickey@boss ~]$ ping -c 1 hp1
PING hp1.mini-isi.deterlab.net (192.168.254.1): 56 data bytes
64 bytes from 192.168.254.1: icmp_seq=0 ttl=63 time=3.496 ms

--- hp1.mini-isi.deterlab.net ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.496/3.496/3.496/0.000 ms
[jhickey@boss ~]$
}}}

=== Adding your switch to the database ===

Using the hostname you have given your switch, insert a row into the nodes table. If you are adding a dedicated control network switch, use '''ctrlswitch''' as the role. Otherwise, the role is '''testswitch'''.

{{{
insert into nodes set node_id='hp1',phys_nodeid='hp1',type='hp2810',role='testswitch';
}}}
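Before moving on, it can be worth confirming that the row looks right. A minimal read-only check, assuming the hp1 example above:

{{{
mysql tbdb -e "select node_id, phys_nodeid, type, role from nodes where role in ('testswitch','ctrlswitch');"
}}}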
=== Switch interconnects ===

If your installation has more than one switch, we need to tell the database about each interconnect so that VLAN trunking can be enabled and so that the testbed does not try to oversubscribe the link. We created the trunk interface types earlier in this document.

Let's say we have hp1 and hp2, connected by a single 1GbE link: hp2 module A, port 5 is connected to hp1 module B, port 23. We need to add two rows to the interfaces table (note that current_speed is now in Mbit/s):

{{{
insert into interfaces set node_id='hp1',card=2,port=23,mac='000000000000',iface='B/23',role='other',
    current_speed='1000',interface_type='trunk_1GbE',uuid=UUID();
insert into interfaces set node_id='hp2',card=1,port=5,mac='000000000000',iface='A/5',role='other',
    current_speed='1000',interface_type='trunk_1GbE',uuid=UUID();
}}}

We also need to add an entry to the wires table for these two switches:

{{{
insert into wires set node_id1='hp1',card1=2,port1=23,
    node_id2='hp2',card2=1,port2=5,
    type='Trunk';
}}}

'''Note:''' For switches that are not modular, set card to 1. For Ethernet interfaces on nodes, card starts at (and typically is) 0.

=== Setting up switch stacks ===

The idea of switch stacks comes from sites that run separate control and experimental networks. In that scenario, it does not make sense to create experimental VLANs on switches that function only as control network switches, and vice versa. The typical DETER deployment, however, uses one switch to handle both. In this case, we add the same switch to two different stacks ('Experiment' and 'Control', which we also have to set up in switch_stack_types). We also make sure that is_primary is set to 1 for the Experiment stack line and 0 for the Control stack line (presumably so that VLANs are only created once per switch).

First, create the switch_stack_types. For our mini-isi setup, we have the two types:

{{{
insert into switch_stack_types (stack_id, stack_type, snmp_community, leader)
    values ('Control', 'generic', 'private', 'hp1');
insert into switch_stack_types (stack_id, stack_type, snmp_community, leader)
    values ('Experiment', 'generic', 'private', 'hp1');
}}}

Then, in our mini-isi example, we add two entries for our switch hp1:

{{{
insert into switch_stacks (node_id,stack_id,is_primary)
    values ('hp1','Experiment',1), ('hp1','Control',0);
}}}

=== Testing your switch stack ===

On boss, run '''wap snmpit -l -l''' to list all VLANs. Your output should look something like this (for now there might be some MIB warnings):

{{{
[jhickey@boss ~]$ wap snmpit -l -l
VLAN     Project/Experiment VName Members
--------------------------------------------------------------------------------
CONTROLH CONTROLHW          hp1.1/3 hp1.1/4 hp1.1/5
CONTROL  CONTROL            hp1.1/3 hp1.1/6 hp1.1/7 hp1.1/8 hp1.1/9 hp1.1/10
                            hp1.1/11 hp1.1/12
INTERNET INTERNET           hp1.1/1 hp1.1/2 hp1.1/3
BOSS     BOSS               hp1.1/3
[jhickey@boss ~]$
}}}
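If snmpit fails or prints nothing, a good first step is to confirm that the stack rows actually made it into the database. A couple of read-only sanity queries against the tables populated above:

{{{
mysql tbdb -e "select * from switch_stack_types;"
mysql tbdb -e "select * from switch_stacks;"
}}}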
== Setting up your power controller ==

=== Adding the power controller to DNS ===

This is pretty much the same as for the switch above. Technically, you can get away without this step, since the IP address will also be in the database, but it is good housekeeping.

For example, on mini-isi.deterlab.net we have a power controller, which we will name apc23, at 192.168.254.23 on the HARDWARE_CONTROL network. So we add it to DNS as follows:

{{{
[jhickey@boss ~]$ sudo su -
boss# echo "apc23 IN A 192.168.254.23" >> /etc/namedb/mini-isi.deterlab.net.internal.db.head
boss# logout
[jhickey@boss ~]$ /usr/testbed/sbin/named_setup
[jhickey@boss ~]$ ping -c 1 apc23
PING apc23.mini-isi.deterlab.net (192.168.254.23): 56 data bytes
64 bytes from 192.168.254.23: icmp_seq=0 ttl=254 time=3.476 ms

--- apc23.mini-isi.deterlab.net ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 3.476/3.476/3.476/0.000 ms
}}}

=== Adding the power controller to the database ===

We assume you are using an APC 7902 networked power controller; the default node_type for this is 'APC'.

First, add a node entry for the power controller. For mini-isi, our power controller is named apc23:

{{{
insert into nodes set node_id='apc23', phys_nodeid='apc23', type='APC', role='powerctrl';
}}}

Next, add a row to the interfaces table. For mini-isi, the power controller is at 192.168.254.23:

{{{
insert into interfaces set node_id='apc23', IP='192.168.254.23', mask='255.255.255.0',
    interface_type='fxp', iface='eth0', role='other';
}}}

Finally, add a wires table entry:

{{{
insert into wires set node_id1='apc23', card1=0, port1=1,
    node_id2='hp1', card2=1, port2=4,
    type='Control';
}}}

=== Telling the database about the power controller ports ===

This is jumping the gun, since we have not gotten to the point of adding PC type nodes to the testbed yet, but when we do, their outlet assignments will go in the '''outlets''' table:

{{{
mysql> describe outlets;
+------------+---------------------+------+-----+-------------------+-------+
| Field      | Type                | Null | Key | Default           | Extra |
+------------+---------------------+------+-----+-------------------+-------+
| node_id    | varchar(32)         | NO   | PRI |                   |       |
| power_id   | varchar(32)         | NO   |     |                   |       |
| outlet     | tinyint(1) unsigned | NO   |     | 0                 |       |
| last_power | timestamp           | NO   |     | CURRENT_TIMESTAMP |       |
+------------+---------------------+------+-----+-------------------+-------+
4 rows in set (0.00 sec)
}}}

So to add, say, pc001, which is on apc23 outlet 1, we would do:

{{{
insert into outlets set node_id='pc001', power_id='apc23', outlet=1;
}}}

You can add a dummy entry and test if you like.

== Setting up your serial server ==

TBD. We hope to have a VMware image with the necessary software for Digi Etherlite devices soon.

= Setting up the PXE environment on boss =

== Installing a PXE boot loader ==

When the PXE boot ROM runs during machine boot, it downloads a boot loader over TFTP. The default boot loader for testbed nodes is /tftpboot/pxeboot.emu. Four different versions of pxeboot.emu are distributed with the tarball:

 * pxeboot.emu-null : does not display the pxeloader prompt
 * pxeboot.emu-sio : displays the pxeloader prompt on the COM1 serial port
 * pxeboot.emu-sio2 : displays the pxeloader prompt on the COM2 serial port
 * pxeboot.emu-vga : displays the pxeloader prompt on the video output

Pick the loader that best suits your installation and copy it into place (substitute null, sio, sio2, or vga for <variant>):

{{{
cp /tftpboot/pxeboot.emu-<variant> /tftpboot/pxeboot.emu
}}}

You can also look at modifying '''/usr/local/etc/dhcpd.conf.template'''. If you do, you will have to generate a new dhcpd.conf using '''/usr/testbed/sbin/dhcpd_makeconf > /usr/local/etc/dhcpd.conf''' and then restart dhcpd.
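For reference, the regeneration step looks like this as a shell session on boss. The rc script used to restart dhcpd is an assumption here; check /usr/local/etc/rc.d/ on your boss for the actual name:

{{{
/usr/testbed/sbin/dhcpd_makeconf > /usr/local/etc/dhcpd.conf
# Restart dhcpd so it rereads the new config. The script name below is
# an assumption; look in /usr/local/etc/rc.d/ for the one on your system.
/usr/local/etc/rc.d/2.dhcpd.sh restart
}}}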
== Setting up the MFS for testbed nodes ==

There are three different MFS (memory file system) images that come with DETER/Emulab. These filesystems are PXE booted over the network via TFTP and allow us to perform various parts of node maintenance. They are:

 * The Admin MFS (/tftpboot/freebsd)
   * Primarily used to create new operating system images using imagezip and ssh.
 * The New Node MFS (/tftpboot/freebsd.newnode)
   * The default image for nodes not explicitly listed in dhcpd.conf.
   * Has scripts that try to identify what type of node is being booted, based on node_type variables.
   * Runs a process that enables auto-detection of which switch ports the node is wired into.
 * The Frisbee MFS (/tftpboot/frisbee)
   * Used when loading an operating system image onto a node's disk.

These tasks are split among multiple images to keep each image small, since the images are booted over the network. With faster networks, they will likely be rolled into a single Linux-based image in the future.

You will have to install your root SSH keys from boss into each MFS and change the root password. This process has been automated by the setup_mfs script in testbed/install.

= Testbed Nodes =

== Network connections ==

Generally, each node will have one control network interface and one or more experimental interfaces. The control network interface should be on a switch port that is in the CONTROL (VLAN 2003) network. The experimental interfaces should be on ports that are enabled; they can sit in a default VLAN for now.

== BIOS settings for testbed nodes ==

The testbed nodes should be set to boot only from the network. Disable hard drive boot to prevent failed PXE requests from falling through to booting whatever is on the disk.
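Once a node is wired up and has an outlets entry, you can verify the whole power path from boss. A minimal sketch, assuming the hypothetical pc001/apc23 outlet mapping from the power controller section above and that, like snmpit, the testbed power command is run through the wap wrapper:

{{{
# cycle power on pc001 via its outlets entry (apc23, outlet 1)
wap power cycle pc001
}}}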