The Maths SCAN

LATEST NEWS: New data storage facility added to the SCAN

A new server, bieberbach, was added to the SCAN on May 28th to store the large amounts of experimental data the SCAN often generates. Following the SCAN tradition of naming systems after rivers, the Bieberbach is a small tributary that feeds the Rodau Creek in Hessen, Germany, near where the main user of the SCAN grew up. Rather fittingly, the mathematician Ludwig Bieberbach proposed the Bieberbach Conjecture, which was proven in 1984, two years after his death.

The server itself is an HP ProLiant MicroServer fitted with 3 x 2 TB disks in addition to its system disk. It runs OpenIndiana, the fully open-sourced descendant of the former Sun OpenSolaris project, which was unfortunately killed off by Oracle following its aggressive takeover of Sun Microsystems last year. The storage disks form a ZFS RAIDz pool with a usable capacity of about 3.5 TB; RAIDz offers the advantages of RAID 5 without its disadvantages - the well-known "write hole" and the "bit rot" phenomena common to many other RAID schemes. Data gathered during SCAN operation is initially stored on potomac4-wicomico and is then transferred to bieberbach to free up space for subsequent simulations.
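As a rough sanity check of the quoted capacity (a sketch only - ZFS metadata overhead and decimal-TB versus binary-TiB rounding both vary):

```python
# RAIDz with single parity across 3 x 2 TB disks: one disk's worth of
# capacity goes to parity, the rest is available for data.
disks = 3
disk_tb = 2.0            # decimal terabytes per disk
parity_disks = 1         # RAIDz1 stores one disk's worth of parity

data_tb = (disks - parity_disks) * disk_tb     # 4.0 decimal TB
data_tib = data_tb * 1e12 / 2**40              # ~3.64 TiB before overhead
print(round(data_tib, 2))
```

Allowing for ZFS metadata and reservation overhead, this lands close to the 3.5 TB quoted above.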

Introducing SCAN 4


The new SCAN 4 servers

SCAN 4 was soft-launched on Friday, April 8th and replaces, almost to the day, the SCAN 3 we have been using for the past two years. Hosted at last on enterprise-grade hardware in the ICT data centre, the new servers bring a boost in the reliability of users' disk space thanks to a RAID 5 disk array, along with many other enhancements. And for the first time, the SCAN is based on a mirrored server pair to ensure continued reliability and resilience in the long term.

Continuing the tradition of naming the SCAN servers after major rivers, the new servers add an extra dimension to this theme: potomac4 replaces potomac3, but as potomac4 is now a pair of servers, we have added the names of two tributaries of the River Potomac to distinguish them - they are known as potomac4-wapocomo and potomac4-wicomico. Both are HP DL360 G4 servers with the latest FreeBSD 8.2 installed on a pair of 73 GB disks in RAID 1, while users' files are accommodated on a RAID 5 array comprising four 146 GB disks with a usable capacity of 420 GB.

The old SCAN 3 servers potomac3 and nolichucky will live on - they will move from their perch on top of the network rack in Huxley 215 and be used for developing the next generation of the SCAN.

What is the Maths SCAN?

The Maths SuperComputer At Night initiative harnesses the power of many individual PCs to form a supercomputer capable of carrying out very large computational tasks such as Monte Carlo simulation, climate change modelling, and so on. If you can imagine a large computer containing not just one or two CPUs (Central Processing Units, or processors) but 50, 100 or 200 of them, plus a huge amount of memory, this is a very good approximation of the Maths supercomputer. Currently, all 36 of the HP dc7900 PCs in Huxley room 215 plus the 25 HP dc7800 PCs in room 410 form part of this cluster outside normal college hours - this gives us 36 x 64-bit quad-core CPUs and 25 x 64-bit dual-core CPUs, a total of 194 CPU cores and 316 gigabytes of memory. As these computers would otherwise sit idle at night, at weekends and during college holidays, this gives us a vast amount of raw processing power for no real cost.
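The headline core count can be checked with a quick calculation using the machine counts given above:

```python
# SCAN client nodes as described in the text
nodes = [
    {"model": "HP dc7900", "count": 36, "cores": 4},  # Huxley 215, quad-core
    {"model": "HP dc7800", "count": 25, "cores": 2},  # room 410, dual-core
]

total_cores = sum(n["count"] * n["cores"] for n in nodes)
print(total_cores)  # 194, as quoted above
```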

The technology behind the SCAN

When these computer rooms close at the end of the day, the PCs shut down and then reboot as diskless FreeBSD systems, loading and booting FreeBSD over the network from a remote boot server. The hard disks in the PCs remain untouched and are not used in any way while the systems are running FreeBSD. There are many advantages to setting up large-scale computation facilities in this way; one is that the cluster is very easy to control, maintain and upgrade, since there is only one operating system image used by all of the clients. So instead of having to change, upgrade or even rebuild each machine individually, this work is carried out only on the image served by the network boot server, and the next time the client nodes boot from it, they too will be running the new or changed image. It is of course possible to customise the image and the start-up scripts to some extent so that machines in one group - those in Huxley 215, say - load a different configuration at boot time.

In the current SCAN 3 implementation, much of the booted PCs' live filesystem is hosted on a disk on the boot server, which makes it easy to make immediate 'hot' changes to the operating system running on all of the client PCs - tuning the kernel while it is running, adding new user accounts and so on. Previously, a reboot would have been required to load a new operating system image.
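For illustration, a diskless-boot setup of this kind typically hinges on two small pieces of server configuration: the DHCP server points PXE clients at the boot loader, and the boot server exports the shared root filesystem read-only over NFS. The addresses, paths and filenames below are invented for the sketch and are not the SCAN's actual settings:

```
# dhcpd.conf (ISC dhcpd) - illustrative values only
subnet 192.168.10.0 netmask 255.255.255.0 {
    range 192.168.10.100 192.168.10.200;
    next-server 192.168.10.1;        # TFTP/boot server
    filename "pxeboot";              # FreeBSD PXE boot loader
}

# /etc/exports on the boot server - one read-only root image for all clients
/diskless/freebsd -ro -maproot=root -network 192.168.10.0 -mask 255.255.255.0
```

Because every client mounts the same exported image, a change made to that one tree is picked up by all nodes the next time they boot - the single-image property described above.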

But the real beauty of the system is its almost limitless scalability and the ease with which more nodes can be added to the SCAN; anything from a single computer to many thousands can be added simply by enabling booting from the network and assigning them to a machine group that accesses the SCAN boot server. Currently the system operates with 79 nodes in Maths, but as many as 160 nodes have been operational in the past, encompassing teaching clusters in the departments of Physics and Chemistry. Unfortunately, political issues and the fact that the system does not fit easily into the college scheme of things have largely limited its use to Mathematics, although a few non-Maths users have used it.

Here is a diagrammatic representation of the SCAN (please note: this diagram is out of date as it depicts SCAN 3, not the new SCAN 4).

diagram of the SCAN

How does it work?

All of these machines have Windows XP Professional installed on their local hard disks and operate as normal Windows PCs during the daytime, as required for departmental teaching purposes. At the end of the day when the room is closed to students, the machines shut down automatically and then boot FreeBSD UNIX from the network boot server, running UNIX entirely in RAM (memory) and leaving the machine's own hard disk untouched.

Each system is essentially an autonomous node, but they are all networked together and can communicate with each other and with the controller. Each system can therefore be thought of as a CPU with its own memory attached, linked to the other CPUs and memories in the SCAN via the network.

The user's compute job resides in the user's home directory on the nolichucky fileserver, and the programs have usually been written in such a way that they know how to divide up the tasks involved and distribute them to each PC in the SCAN for processing. Output from the computations is written to disk files in the usual way in the user's home directory on nolichucky, not to the PC's hard disk. There are various ways in which this parallel processing can be implemented - one is to use the MPI (Message Passing Interface) protocol, which is fully supported on the SCAN, but some users have written their own low-level network stacks, which offer higher performance as the code interacts directly with the network interface rather than through a multi-layer API (application programming interface).
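The divide-distribute-gather pattern described above can be sketched in a few lines. This is only a toy illustration of the idea using the Python standard library - a thread pool stands in for the SCAN's worker nodes, and the work function is invented; real SCAN jobs use MPI or custom network code:

```python
from concurrent.futures import ThreadPoolExecutor

def worker(chunk):
    """Stand-in for one node's share of the computation:
    here, summing the squares of its slice of the data."""
    return sum(x * x for x in chunk)

data = list(range(1000))
n_workers = 4

# divide the problem into one chunk per worker...
chunks = [data[i::n_workers] for i in range(n_workers)]

# ...distribute the chunks, then gather and combine the partial results
with ThreadPoolExecutor(max_workers=n_workers) as pool:
    partials = list(pool.map(worker, chunks))

total = sum(partials)
print(total)  # equals the serial sum of squares
```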

The computers in the SCAN do not have to be operated as a massively parallel cluster - they can be used individually too. Some tasks may be difficult to code for parallel computation, or in some cases it may simply not be worth the time and effort to make a program parallel-capable even though a lot of data needs to be processed as quickly as possible; you can then run multiple instances of that program on some or all of the CPU cores in the SCAN to achieve this.
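Running independent instances of a serial program side by side can be sketched like this; the one-liner worker is an invented stand-in for a real analysis program, and each copy processes its own input with no communication between them:

```python
import subprocess
import sys

# Launch several independent copies of the same (serial) program,
# one per core, each with its own input.
inputs = ["a", "b", "c", "d"]
procs = [
    subprocess.Popen(
        [sys.executable, "-c", "import sys; print(sys.argv[1].upper())", arg],
        stdout=subprocess.PIPE, text=True)
    for arg in inputs
]

# Collect each instance's output once it finishes
results = [p.communicate()[0].strip() for p in procs]
print(results)
```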

Early in the morning of the following day, after the SCAN has worked all night, the cluster shuts down automatically and reboots back into Windows, ready for the room's re-opening to students.

At what times is the SCAN operational?

As the SCAN is distributed over three different rooms with differing opening times, the number of CPUs available varies according to the day of the week and the time of day. From Monday to Thursday, the SCAN starts up with 25 machines in room 410 at 8 pm, joined by another 36 machines in room 215 at 11 pm to bring it to its full complement of 61 machines; both groups then run until 5.50 am the following morning. On Friday evenings, the SCAN group in room 410 starts up at the same time as on the other weekdays and continues to run over the weekend until 5.50 am the following Monday. However, the PCs in room 215 remain available for student use throughout Saturday and Sunday between 5.50 am and 11 pm.

During the Easter, summer and Christmas vacations, rooms 410 and 215 are closed altogether from the end of term and the clusters there will run full time as part of the SCAN over this period. We will get an awful lot of computing done! In addition, the 66 PCs in the Maths Learning Centre (room 414) will join the SCAN whenever this facility is closed - these machines have quad-core hyperthreaded processors and 8 GB of memory each.

If rooms 215 and 410 are closed, are there any Windows PCs I can use?

The Maths Learning Centre (MLC, room 414) houses 66 PCs running Windows 7 which are available to all users during MLC opening hours. In addition, room 409 has a number of PCs running Windows 7 which are available to postgraduates and year 4 undergraduates of the Mathematics department. Finally, the undergraduate common room 212 is home to 8 new HP PCs running Windows 7. These systems are accessible to all Maths users at any time.

These computers should satisfy the requirement for undergraduate computing facilities during college vacations but do let me know if these reduced general computing facilities cause you any undue hardship or inconvenience.

How powerful is the SCAN?

Percolation code written by Dr Gunnar Pruessner, a researcher in Mathematical Physics, and run on the Maths SCAN has broken several records previously set by a Cray MP3 supercomputer, completing simulations in a shorter time than this million-dollar-plus machine.

Why FreeBSD and not Linux?

We are often asked why the SCAN runs FreeBSD UNIX rather than the more popular Linux, so here is an explanation: when the project was first conceived in 2001, the only Linux distribution with any support for diskless booting was SuSE Linux, and early attempts to realise the SCAN were based on SuSE 7.1 using PCs fitted with 3Com's 3c590 and 3c905 network interface cards (NICs). These had 28-pin DIL chip sockets that allowed third-party boot ROMs or locally-programmed EPROMs to be fitted, making it possible to boot the system from a network boot server. At about this time, however, PC technology was moving on: separate PCI and ISA bus NICs in desktop PCs were rapidly giving way to NICs embedded on the motherboard, with the boot ROMs being replaced by various implementations of Intel's PXE (Preboot eXecution Environment) standard.

Support for these early on-board NICs and the PXE environment in Linux lagged behind the new technology, and we had a lot of problems getting PCs with embedded NICs to boot Linux from the boot server. FreeBSD, on the other hand, supported both the on-board NICs and PXE literally 'out of the box'. Historically, support for diskless booting has always been good in UNIX, as many UNIX operating systems date from the days when hard disks were expensive items and it often made good financial sense to have a single file/boot server with one or more hard disks and then boot a large number of diskless workstations from it over the network. Linux, by contrast, is a relative newcomer and arrived at a time when the widespread adoption of IDE disks was driving down the cost of large hard disks, so Linux has always been very much a disk-based system.

Since one of the two developers of the SCAN, Gunnar Pruessner, uses FreeBSD as his main desktop operating system and is very familiar with it, the decision was taken to switch to FreeBSD and almost overnight, a working and fully-functional SCAN was born. FreeBSD also has other advantages over Linux - the codebase is more mature, it is demonstrably more stable (over 50% of the web servers in Netcraft's top 20 uptime league tables run FreeBSD) and it is also considerably more secure than Linux. It is sometimes pointed out that the range of commercial software available for FreeBSD is small compared with Linux but most Linux software can be run on FreeBSD systems if the kernel is compiled with Linux ABI (Application Binary Interface) support.



The SCAN 3 servers, potomac3 and nolichucky

potomac3 is a 64-bit AMD server and will be used to host a mini-SCAN for ongoing development work. nolichucky is a 32-bit Pentium 4 server crammed full of fast SCSI disks and will probably find a use elsewhere.

Future plans

There are a lot of Windows PCs sitting idle at night and at weekends, not just in the Maths department but in the college as a whole; the ongoing desktop PC renewal programme is putting ever more powerful computers onto people's desktops, most of which are very under-utilised. There is a vast pool of unused compute resources sitting idle, all of which could be put to use with little or no effort, and with no changes made to the systems' local hard disk installations. And above all, in these times of fiscal stringency, all of this costs nothing!

Older SCAN items:

April 29th, 2011: Announcing SCAN 4 and the Easter 2011 timetable
December 20th, 2010: Christmas 2010 timetable
April 3rd, 2009: the SCAN goes 64-bit!
July 2nd, 2007: announcing the summer vacation timetable
May 15th, 2007: Suspension of SCAN in Huxley 410/411 for 6 weeks owing to student projects
November 24th, 2006: operating system upgraded to FreeBSD 6.2
March 24th, 2006: operating system upgraded to FreeBSD 6.0
March 1st, 2004: SCAN news update
July 21st, 2003: Introducing the SCAN

Andy Thomas

Research Computing Officer
Department of Mathematics

last updated: 28.05.2011