Current news...


March 16th: Planned Bazooka Hadoop cluster upgrade, reorganisation of backup servers

Bazooka Hadoop cluster

The increasing popularity of the Big Data statistics courses hosted on the Stats section's Bazooka Hadoop cluster has been the catalyst for further upgrades to this cluster, which was last updated in late 2016. An additional node optimised for teaching use, with 64 processor cores, 528 GB of memory and 8 TB of local disk storage, is being added to the cluster. This node, called athena, will be a second 'head node' operating in parallel with the existing bazooka head node; although it will normally be available to research users, during courses it will be dedicated exclusively to teaching use.

At the same time, a major upgrade of the MapR Hadoop ecosystem is planned for around Easter, taking it from the current version 5.2 to 6.1, along with an Ubuntu operating system upgrade. The new athena node has been installed, and user data stored in the existing HDFS distributed filesystem is currently being backed up to a non-HDFS server; there is a lot of it, so this will take a few days to complete. Once the new cluster set-up has been trialled on the Churchill test cluster, the Mortar test cluster will also be upgraded so that it can serve as an alternative facility while the Bazooka cluster itself is upgraded; the Bazooka upgrade is likely to take several days, during which time the cluster will be unavailable for use.
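
For readers curious how data gets copied out of HDFS onto an ordinary filesystem, the sketch below is purely illustrative: it wraps the standard 'hdfs dfs' command-line tools from Python, and the source path and destination directory shown are hypothetical examples rather than the actual locations used on the Bazooka cluster.

    # Illustrative sketch only: one way HDFS data might be copied out to a
    # non-HDFS server using the stock 'hdfs dfs' command-line tools. The
    # paths below are hypothetical examples, not the Bazooka cluster's
    # actual locations.
    import subprocess

    SRC = "/user"                # hypothetical HDFS path holding user data
    DEST = "/backup/hdfs-dump"   # hypothetical local (non-HDFS) destination

    def hdfs_usage(path):
        """Return a human-readable summary of the data stored under 'path'."""
        result = subprocess.run(["hdfs", "dfs", "-du", "-s", "-h", path],
                                capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def copy_to_local(src, dest):
        """Copy an HDFS tree onto the local filesystem (slow for large trees)."""
        subprocess.run(["hdfs", "dfs", "-copyToLocal", src, dest], check=True)

    if __name__ == "__main__":
        print("Data to back up:", hdfs_usage(SRC))
        copy_to_local(SRC, DEST)
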

Reorganisation of backup servers

Recent storage upgrades to various group and sectional compute servers - especially the Stats section's modal and medial systems - have had the knock-on effect of requiring more backup storage capacity so that we can continue to hold full backups of users' data. A reorganisation of the contents of the three on-site backup servers is now under way, with backups being moved between these servers to make better use of the available space. Unfortunately, even with the dedicated internal inter-server networks within Huxley 616, the limiting factor is currently the gigabit (1000 Mbit/second) network speed, and this operation will take some time to complete since a huge amount of data is being moved about.
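
To give a rough sense of why this takes so long: a gigabit link carries at most about 125 MB per second, so even at full line rate a single terabyte takes over two hours to transfer. The quick calculation below illustrates the arithmetic; the 20 TB figure is a made-up example, not the actual volume of backup data being moved.

    # Back-of-the-envelope transfer time over a gigabit link. The 20 TB
    # figure is a made-up example, not the actual volume of backup data.
    LINK_MBIT_PER_S = 1000                    # gigabit Ethernet
    bytes_per_s = LINK_MBIT_PER_S * 1e6 / 8   # 125 MB/s theoretical maximum

    data_tb = 20                              # hypothetical data volume
    seconds = data_tb * 1e12 / bytes_per_s

    print(f"{data_tb} TB at line rate: roughly {seconds / 3600:.0f} hours "
          f"({seconds / 86400:.1f} days)")
    # Real-world protocol overheads and disk speeds make it slower still.
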

Older news items:

February 19th: ma-offsite2 now online
January 19th: Matlab 2018b upgrade ongoing
December 14th: Matlab upgrade to version R2018b started, Stats section compute & storage enhancements completed, silos3 and 4 introduced
September 18th: more local storage for Stats modal server and new PostgreSQL database server launched
August 29th: new 'du' command options, cluster R upgrade and ma-backup3
July 2nd: nvidia3 now has two GPU cards
May 15th: Early summer update
March 29th: Easter update
March 24th: spring update
March 10th: late winter update
December 15th: pre-Christmas update
November 22nd: late November update
October 8th: start of 2017/2018 academic year update
2017: Midsummer's Day update
June 16th, 2017: mid-June update
June 2nd, 2017: Early summer update
April 20th, 2017: Spring update 2
March 22nd, 2017: Early spring update
March 10th, 2017: Winter update 2
February 22nd, 2017: Winter update
November 2nd, 2016: Autumn update
October 21st, 2016: Late summer update 2
October 14th, 2016: Late summer update
February 19th, 2016: Winter update
December 11th, 2015: Autumn update
September 14th, 2015: Late summer update 2
May 2nd, 2015: Spring update 2
April 26th, 2015: Spring update
November 11th, 2014: Autumn update
September 17th, 2014: Summer update 2
July 17th, 2014: Summer update
March 15th, 2014: Spring update
November 2nd, 2013: Summer update
May 24th, 2013: Spring update
January 23rd, 2013: Happy New Year!
November 22nd, 2012: No news is good news...
November 17th, 2011: A revamp for the Maths SSH gateways
September 7th, 2011: Failed systems under repair
August 14th, 2011: Introducing calculus, a new NFS home directory server for research users
July 19th, 2011: a new staging server for the compute cluster
July 19th, 2011: A new Matlab queue and improved queue documentation
June 30th, 2011: Updated laptop backup scripts
June 18th, 2011: More storage for the silo...
June 16th, 2011: Yet more storage for the SCAN...
June 10th, 2011: 3 new nodes added to the Maths compute cluster
May 21st, 2011: Announcing SCAN large storage and subversion (SVN) servers
May 26th, 2011: Reporting missing scratch disk on macomp01
May 21st, 2011: Announcing completion of silo upgrades
May 16th, 2011: Announcing upgrades for silo
April 14th, 2011: Goodbye SCAN 3, hello SCAN 4
March 26th, 2011: quickstart guide to using the Torque/Maui cluster job queueing system
March 9th, 2011: automatic laptop backup/sync service, new collaboration systems launched
May 20th, 2010: Scratch disks are now available on all macomp and mablad compute cluster systems
March 11th, 2010: Introducing job queueing on the Fünf Gruppe compute cluster
October 16th, 2008: Introducing the Fünf Gruppe compute cluster
June 18th, 2008: German City compute farm now expanded to 22 machines
February 7th, 2008: new applications on the Linux apps server, unclutter your desktop
November 13th, 2007: aragon and cathedral now general access computers, networked Linux Matlab installation upgraded to R2007a
September 14th, 2007: Problems with sending outgoing mail for UNIX & Linux users
July 23rd, 2007: SCAN available full-time over the summer vacation, closure of Imperial's Usenet news server
May 15th, 2007: Temporary SCAN suspension, closure of the Maths Physics computer room, new research computing facilities
January 14th, 2005: Exchange mail server upgrade, spam filtering with pine and various other enhancements


Andy Thomas

Research Computing Manager,
Department of Mathematics

last updated: 16.3.2019