Changes between Version 4 and Version 5 of Installation/Hardware
Timestamp: Oct 29, 2012 6:42:02 PM
Installation/Hardware
=== Network Switches ===

v4:
> There are two networks in a typical DETER/Emulab install. The control network is the one over which testbed nodes network-boot and mount filesystems. The experimental network is where experimental topologies are instantiated. Right now, we recommend HP 5400zl series switches. The number of ports depends on how many nodes you want to support. At DETER, we typically have at least five interfaces per testbed node.

v5:
> The DETER software stack dynamically sets up real VLANs on switches. There are two networks in a typical DETER/Emulab install. The control network is the one over which testbed nodes network-boot and mount filesystems. The experimental network is where experimental topologies are instantiated. These networks may be separate or may coexist on the same switch, depending on how big your testbed is and whether you mind control/fileserver traffic sharing switch trunks with experimental data. For small installs, a single switch is fine.
>
> Although you may be tempted to borrow an engineering-sample PacketMasher 9000 from that cool startup your friend works at in Silicon Valley, it is important to choose a switch that has software support. Adding support for new switches is possible, but non-trivial. Right now, we recommend HP 5400zl series switches. The number of ports depends on how many nodes you want to support. At DETER, we typically have at least five interfaces per testbed node.
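The dynamic VLAN setup mentioned above is, at bottom, the testbed software driving the switch's management interface on your behalf. As a rough hand-written illustration only (the VLAN number and port names here are made up, and the software typically programs the switch over SNMP rather than through the CLI), carving an experimental VLAN out of an HP ProCurve-class switch looks something like:

```
configure
vlan 273
   name "exp-vlan-273"
   untagged A1-A4
   tagged A24
   exit
write memory
```

Here A1-A4 would be node-facing experimental ports and A24 an uplink trunk; the software creates and tears down such VLANs as experiments are swapped in and out.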
[http://h10144.www1.hp.com/products/switches/HP_ProCurve_Switch_5400zl_Series/overview.htm HP 5400zl Series overview]

v4:
> For small installations, we have had good luck with HP 2810 switches.

v5:
> For really small installations, we have had good luck with HP 2810 switches:

[http://h10146.www1.hp.com/products/switches/HP_ProCurve_Switch_2810_Series/overview.htm HP 2810 overview]

…

=== Power and Serial Controllers ===

v4:
> Historically, we have used real power controllers and serial concentrators rather than IPMI. If your site is on a limited budget, we expect that IPMI could work just as well as these items. We, however, are not using them at DETER yet.

v5:
> Historically, we have used real power controllers and serial concentrators rather than IPMI. We are now also using IPMI, so you can skip the dedicated power and serial controllers if you want.

For power controllers, we are using APC 7902 rack PDUs.

…

=== Infrastructure Machines ===

v4:
> There are three main machines in a DETER/Emulab installation: boss, users, and router. Boss hosts the database, web interface, and main logic. Users acts as the NFS/SMB server and user login machine. All these machines run FreeBSD 7.4.

v5:
> There are three main machines in a DETER/Emulab installation: boss, users, and router. Boss hosts the database, web interface, and main logic. Users acts as the NFS/SMB server and user login machine. All these machines run FreeBSD 9.1.

These machines do not need to be very high powered if your budget is limited. We have successfully deployed all three machines on a single PowerEdge 860 with 4GB of RAM running VMware ESXi. How much of a box you provision really depends on your site requirements.

…

=== Testbed nodes ===

v5 (added):
> The Emulab software stack that powers DETER predates the 'Cloud' buzzword by about a decade.
> Although we support a virtualization overlay through [http://containers.deterlab.net DETER Containers], we are really about handing researchers physical machines and physical networks. At DETER we tend to purchase low-end, single-CPU server-class machines. Things like advanced remote management, redundant power supplies, and over-engineered hardware are best left to the core servers described above. If you have a choice between 16 entry-level machines or 4 overpowered nodes, we currently recommend going for quantity.

What you provision here is really up to you. All of the machines we are using at DETER are no longer in production, so we cannot recommend specific models.

v4:
> * The machines must be able to PXE boot.
> * The machines need to be capable of running FreeBSD 7.4.
> * You should ideally have about 5 network interfaces on each node: one for the control network and four for the experimental network.

v5:
> * The machines must be able to PXE boot.
> * The machines need to be capable of running FreeBSD 8.3. Watch out for fancy RAID controllers or strange network cards. Standard SATA drives and Intel NICs are what we use.
> * You should ideally have about 5 network interfaces on each node: one for the control network and four for the experimental network.

v5 (added):
> For our most recent buildout, we used [http://www.supermicro.com/products/system/3U/5037/SYS-5037MC-H8TRF.cfm SuperMicro MicroCloud] machines. We get 8 Xeon E3 servers in 3U of rack space and control them via IPMI.

Aside from these basic requirements, the testbed nodes depend on what you intend to do.

v4:
> Our standard operating system images include Ubuntu 10.04, CentOS 5, and FreeBSD 8.

v5:
> Our standard operating system images include Ubuntu 12.04, CentOS 6, and FreeBSD 9.
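Since v5 leans on IPMI for both power control and node management, it can be handy to script it from boss. The sketch below is our own illustration, not part of the DETER software: the host name and credentials are hypothetical, and it simply shells out to the stock `ipmitool` utility's `chassis power` subcommands.

```python
import shlex
import subprocess

# Hypothetical helper for IPMI power control of testbed nodes.
# BMC host, user, and password here are made-up examples.

def ipmi_command(bmc_host, user, password, action):
    """Build an ipmitool invocation for one node's BMC.

    action is one of the 'chassis power' subcommands we use:
    'status', 'on', 'off', or 'cycle'.
    """
    if action not in ("status", "on", "off", "cycle"):
        raise ValueError("unsupported power action: %s" % action)
    return [
        "ipmitool",
        "-I", "lanplus",   # IPMI v2.0 over the control network
        "-H", bmc_host,
        "-U", user,
        "-P", password,
        "chassis", "power", action,
    ]

def power_cycle(bmc_host, user, password):
    """Run the power cycle (needs ipmitool installed and a reachable BMC)."""
    return subprocess.run(
        ipmi_command(bmc_host, user, password, "cycle"), check=True)

if __name__ == "__main__":
    # Show the command we would run, without touching any hardware.
    print(shlex.join(ipmi_command(
        "pc042-bmc.example.net", "admin", "secret", "cycle")))
```

Serial console access works the same way via `ipmitool ... sol activate`, which is what lets you drop the dedicated serial concentrators.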