Current news...


2014: Summer update

With the hot weather upon us, the undergraduates away for the summer and some of us lucky enough to be away on holiday, here's a quick look at what's in store for the rest of the summer ahead of the start of the autumn term:

  • Huxley 616 server room: power demands in the server room are increasing all the time, so, taking advantage of a scheduled power outage over the weekend of June 28th/29th, another 8 kVA UPS has replaced an older 5 kVA unit. In addition, the power feeds to this room are being replaced with a dedicated 3-phase supply and additional 32 amp single-phase outlets over the weekend of July 18th-21st. This work coincides with the electrical shutdown of the whole of the Huxley building on Saturday, July 19th, which will minimise downtime and disruption to research work.

    On completion of the power supply upgrade, a new rack, additional temperature monitoring probes and power distribution units will be installed in the room before the end of July.

  • Maths compute cluster: following the successful migration of macomp000 and the production nodes macomp01 to macomp13 inclusive from Red Hat Linux to the latest Ubuntu Linux version 14.04, three more compute nodes, macomp14 to macomp16, will be added once the additional power and rack space are ready towards the end of July. The original Red Hat portion of the compute cluster is still available on the 10 nodes mablad01 to mablad10 inclusive.

  • GPU compute facilities: another three Tesla K20 GPU cards have been ordered for the K20-based GPU cluster hosted on the nvidia2 server bringing the total of K20 GPU cards in this cluster to five.

    The K20 GPUs have 2496 cores and 5 GB of memory each (compared with 512 cores and 6 GB for the M2090 GPUs hosted on nvidia1) and will provide a massive boost to the cluster's processing resources. nvidia2 runs Ubuntu 13.04 with the nVidia CUDA Toolkit version 5.5 and the CULA Tools (http://www.culatools.com) sparse and dense libraries, as well as the usual packages installed on nvidia1 (see the short device-listing sketch at the end of this list).

  • Stats compute facilities: the Stats section now has a SAN (Storage Area Network) utilising 7 servers and providing a usable capacity of 60 TB, which, along with the existing large-capacity servers already in use by this section, brings their total storage capacity up to almost 100 TB. In addition, a new Hadoop cluster with 96 TB of HDFS storage is due to be installed within the next few weeks. In the meantime, fete has been repaired and is back in service again.

  • Private server hosting: the number of servers and large workstations hosted on behalf of users has increased since the last update; if you are considering purchasing a system for compute work, please try to choose a rack-mounted system instead of a desktop (or deskside) workstation. Rack-mounted servers take up less space, are designed for 24x7x365 operation and usually have remote management facilities, which make it possible to resolve even quite major system problems remotely instead of having to come into college out of hours to deal with a problem. They are also cheaper than workstations - do you really need that Dolby 5.1 sound card in a compute system locked in a server room?

    Also, if you are bringing a rackmount server over from another institution, please try to remember to bring the rackmount rail kit as well! If this has been left behind or is missing, the server is more difficult to install in a rack - rail kits are surprisingly expensive (well over £100 in some cases) but are usually bundled with new servers. In some cases we can obtain second-hand kits to replace missing ones, but at other times we have to resort to shelves, etc., which is less than ideal.
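
For anyone who would like a quick sanity check of the GPU hosts mentioned above once the new cards are installed, the short CUDA sketch below lists the devices the CUDA 5.5 runtime can see. It is only an illustration - the file name and output format are invented here and it is not part of the installed software - but it compiles with nvcc (e.g. "nvcc listgpus.cu -o listgpus") and reports each card's name, memory and multiprocessor count:

  /* listgpus.cu - list the CUDA devices visible on this host (illustrative only) */
  #include <cstdio>
  #include <cuda_runtime.h>

  int main(void)
  {
      int count = 0;
      if (cudaGetDeviceCount(&count) != cudaSuccess || count == 0) {
          std::fprintf(stderr, "no CUDA-capable devices found\n");
          return 1;
      }
      for (int i = 0; i < count; ++i) {
          cudaDeviceProp prop;
          cudaGetDeviceProperties(&prop, i);
          /* a Tesla K20 should report roughly 5 GB and 13 multiprocessors */
          std::printf("device %d: %s, %.1f GB memory, %d multiprocessors\n",
                      i, prop.name, prop.totalGlobalMem / 1073741824.0,
                      prop.multiProcessorCount);
      }
      return 0;
  }

Run on nvidia2 this should list each of the K20 cards; the same binary run on nvidia1 would list the M2090s instead.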

Older news items:

March 15th, 2014: Spring update
November 2nd, 2013: Summer update
May 24th, 2013: Spring update
January 23rd, 2013: Happy New Year!
November 22nd, 2012: No news is good news...
November 17th, 2011: A revamp for the Maths SSH gateways
September 7th, 2011: Failed systems under repair
August 14th, 2011: Introducing calculus, a new NFS home directory server for research users
July 19th, 2011: a new staging server for the compute cluster
July 19th, 2011: A new Matlab queue and improved queue documentation
June 30th, 2011: Updated laptop backup scripts
June 18th, 2011: More storage for the silo...
June 16th, 2011: Yet more storage for the SCAN...
June 10th, 2011: 3 new nodes added to the Maths compute cluster
May 21st, 2011: Announcing SCAN large storage and subversion (SVN) servers
May 26th, 2011: Reporting missing scratch disk on macomp01
May 21st, 2011: Announcing completion of silo upgrades
May 16th, 2011: Announcing upgrades for silo
April 14th, 2011: Goodbye SCAN 3, hello SCAN 4
March 26th, 2011: quickstart guide to using the Torque/Maui cluster job queueing system
March 9th, 2011: automatic laptop backup/sync service, new collaboration systems launched
May 20th, 2010: Scratch disks are now available on all macomp and mablad compute cluster systems
March 11th, 2010: Introducing job queueing on the Fünf Gruppe compute cluster
October 16th, 2008: Introducing the Fünf Gruppe compute cluster
June 18th, 2008: German City compute farm now expanded to 22 machines
February 7th, 2008: new applications on the Linux apps server, unclutter your desktop
November 13th, 2007: aragon and cathedral now general access computers, networked Linux Matlab installation upgraded to R2007a
September 14th, 2007: Problems with sending outgoing mail for UNIX & Linux users
July 23rd, 2007: SCAN available full-time over the summer vacation, closure of Imperial's Usenet news server
May 15th, 2007: Temporary SCAN suspension, closure of the Maths Physics computer room, new research computing facilities
January 14th, 2005: Exchange mail server upgrade, spam filtering with pine and various other enhancements


Andy Thomas

Research Computing Officer,
Department of Mathematics

last updated: 17.07.2014