March 10th: late winter update
I was hoping to kick this update off with an upbeat "here comes spring" message, but we are still firmly in wintry times, with the UK's worst winter weather in years arriving only last week and a rather turbulent term so far.
Four more nodes from the legacy cluster have been taken out of service and upgraded, and now await moving across campus to join the NextGen cluster, leaving just four compute nodes in service while existing user jobs complete. The NextGen cluster is running well and, as spring approaches, usage is expected to increase.
Several Dell PowerEdge R815 servers have been purchased, mainly for use by research groups but one will be used to reinstate a general purpose non-clustered compute facility for all users in Maths.
Nine years ago, before we went down the managed cluster computing route, all Linux compute servers in Maths were essentially stand-alone systems that anyone could use interactively: logging in via ssh, running programs from the command line and using X forwarding over ssh to run GUI programs such as xmaple and Matlab. The move to batch-mode cluster computing removed this facility, which has disadvantaged both short-term visitors and users who would rather not learn the cluster's batch system, so this new addition will restore the ad hoc compute facilities we used to have. With 4 CPUs, 64 processor cores and 512 GB of memory, there should be enough compute resources for all.
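For anyone unfamiliar with this style of working, a typical interactive session looks like the sketch below. The hostname is a placeholder, not the new server's actual name (which will be announced separately):

```shell
# -X asks ssh to forward X11 traffic, so GUI programs started on the
# server (xmaple, matlab, ...) display on your local desktop.
# The hostname below is a placeholder, not the real server name:
#   ssh -X username@maths-compute.example.ac.uk
#   matlab &     # GUI Matlab, displayed locally via X forwarding
#   xmaple &
# You can confirm locally that -X enables forwarding, without actually
# connecting anywhere, using ssh's -G configuration dump:
ssh -G -X username@maths-compute.example.ac.uk | grep -i '^forwardx11 '
```

If `-X` gives rendering or authorisation errors with some applications, `-Y` (trusted X11 forwarding) is the usual fallback.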
However, a downside of unmanaged ad hoc compute facilities is that they can become a free-for-all, "dog-eat-dog" environment, with users unaware that system resources are finite and greedy or impatient users trying to run far more jobs than a machine can handle. This leads to upset users, lost work and crashed systems, problems we left behind with the switch to managed cluster computing. So although this new facility will be monitored, problems arising from user abuse may not receive high-priority attention; users who abuse the system or effectively prevent others from using it will have their login sessions and running jobs terminated without prior notice. Please use the compute cluster for your work wherever possible, as it is a more stable, managed environment.
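One simple way to be a good citizen on a shared interactive machine is to check how busy it already is before launching work. The sketch below uses only standard Linux tools (nproc and /proc/loadavg) and assumes nothing site-specific; as a rule of thumb, a 1-minute load average at or above the core count means the machine is already fully occupied:

```shell
#!/bin/sh
# Rough courtesy check before starting work on a shared server:
# compare the 1-minute load average with the number of CPU cores.
cores=$(nproc)
# The 1-minute load average is the first field of /proc/loadavg.
load=$(cut -d ' ' -f 1 /proc/loadavg)
echo "cores=${cores} load=${load}"
# awk handles the floating-point comparison.
if awk -v l="$load" -v c="$cores" 'BEGIN { exit !(l < c) }'; then
    echo "Spare capacity: reasonable to start a job."
else
    echo "Machine is already busy: please use the compute cluster instead."
fi
```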
A dedicated MySQL server is also being introduced to meet the demand for a higher-throughput server for projects co-hosted between Maths and other departments. This will replace the MySQL service provided by macomp00, which was originally added to that server as an afterthought to meet a project requirement from Bioinformatics and has since grown far beyond the original expectations.
Older news items:
- December 15th: pre-Christmas update
- November 22nd: late November update
- October 8th: start of 2017/2018 academic year update
- 2017: Midsummer's Day update
- June 16th, 2017: mid-June update
- June 2nd, 2017: Early summer update
- April 20th, 2017: Spring update 2
- March 22nd, 2017: Early spring update
- March 10th, 2017: Winter update 2
- February 22nd, 2017: Winter update
- November 2nd, 2016: Autumn update
- October 21st, 2016: Late summer update 2
- October 14th, 2016: Late summer update
- February 19th, 2016: Winter update
- December 11th, 2015: Autumn update
- September 14th, 2015: Late summer update 2
- May 2nd, 2015: Spring update 2
- April 26th, 2015: Spring update
- November 11th, 2014: Autumn update
- September 17th, 2014: Summer update 2
- July 17th, 2014: Summer update
- March 15th, 2014: Spring update
- November 2nd, 2013: Summer update
- May 24th, 2013: Spring update
- January 23rd, 2013: Happy New Year!
- November 22nd, 2012: No news is good news...
- November 17th, 2011: A revamp for the Maths SSH gateways
- September 7th, 2011: Failed systems under repair
- August 14th, 2011: Introducing calculus, a new NFS home directory server for research users
- July 19th, 2011: a new staging server for the compute cluster
- July 19th, 2011: A new Matlab queue and improved queue documentation
- June 30th, 2011: Updated laptop backup scripts
- June 18th, 2011: More storage for the silo...
- June 16th, 2011: Yet more storage for the SCAN...
- June 10th, 2011: 3 new nodes added to the Maths compute cluster
- May 21st, 2011: Announcing SCAN large storage and subversion (SVN) servers
- May 26th, 2011: Reporting missing scratch disk on macomp01
- May 21st, 2011: Announcing completion of silo upgrades
- May 16th, 2011: Announcing upgrades for silo
- April 14th, 2011: Goodbye SCAN 3, hello SCAN 4
- March 26th, 2011: quickstart guide to using the Torque/Maui cluster job queueing system
- March 9th, 2011: automatic laptop backup/sync service, new collaboration systems launched
- May 20th, 2010: Scratch disks are now available on all macomp and mablad compute cluster systems
- March 11th, 2010: Introducing job queueing on the Fünf Gruppe compute cluster
- October 16th, 2008: Introducing the Fünf Gruppe compute cluster
- June 18th, 2008: German City compute farm now expanded to 22 machines
- February 7th, 2008: new applications on the Linux apps server, unclutter your desktop
- November 13th, 2007: aragon and cathedral now general access computers, networked Linux Matlab installation upgraded to R2007a
- September 14th, 2007: Problems with sending outgoing mail for UNIX & Linux users
- July 23rd, 2007: SCAN available full-time over the summer vacation, closure of Imperial's Usenet news server
- May 15th, 2007: Temporary SCAN suspension, closure of the Maths Physics computer room, new research computing facilities
- January 14th, 2005: Exchange mail server upgrade, spam filtering with pine and various other enhancements
Research Computing Manager,
Department of Mathematics
last updated: 10.03.2018