System Changes and News
This page is a log of major changes to our systems; it will be updated frequently.
Future/Current
Grid
DPM-based storage is offline. The new Grid-XGW system is still undergoing tests. Please contact us for help.
IPPP & Fielding & CfAI
Nothing yet.
IPPP Only
There will be downtime in 2024 to allow for a replacement of the IPPP home storage system. This will bring speed improvements (in both storage and networking) and a slight capacity increase, as well as ensuring continued reliability.
Fielding Only
Nothing yet.
CfAI Only
There may be brief disruption in the future (notice will be given) for some outstanding power infrastructure works.
Past
2024
August
- Remaining GPUs migrated to Rocky 9
- Remaining WS migrated to Rocky 9
- CfAI-WS10 brought into test.
- CfAI Storage migrated to permanent system.
July
- GPU10, GPU11, GPU12 and GPU13 have been consolidated into a single GPU system called GPU10.
- Deployed Environment Modules for gcc-toolset packages.
- Initial deployment of CfAI home storage is online.
June
- WS1 and WS2 are now Rocky Linux 9 (EL9).
- Login is now Rocky Linux 9 (EL9).
- CPU Batch Queue is now Rocky Linux 9 (EL9).
- GCC 12 and GCC 13 deployed on Rocky 9 WS/Login/CPU systems – see the usage guide here.
- CE3 and CE4 had major configuration changes, with brief downtime, to resolve a ~2-year-old bug.
- CfAI-WS3 brought online.
- CfAI had brief outage while power infrastructure works were undertaken.
May
- WS downtime on the 17th and 28th for power infrastructure works.
- 28th – All WS redundant networking was repaired.
- 23rd – Login2 is now Rocky Linux 9 (EL9).
- 23rd – WS5 is now back to staff only and Rocky Linux 9 (EL9).
- 22nd – WS3 and WS4 are now Rocky Linux 9 (EL9).
- 14th – WS6 and WS7 are now Rocky Linux 9 (EL9).
- 8th – CE2 (Fielding Default) is Rocky Linux 9 (EL9) nodes only.
- 2nd – Fielding ‘Long’ Queue is Rocky Linux 9 (EL9) nodes only.
- 2nd – CE3 is Rocky Linux 9 (EL9) nodes only.
- GridUI1 is now Rocky Linux 9 (EL9).
April
- Downtime for power infrastructure work within Physics.
- Downtime for some nodes to enable upgrades to Rocky Linux 9 (EL9).
- 29th – GridUI2 is now Rocky Linux 9 (EL9).
- 29th – CE1 Queue is Rocky Linux 9 (EL9) nodes only.
- 25th – GridUI3 was upgraded to Rocky Linux 9 (EL9).
- 24th – CE4 Queue is Rocky Linux 9 (EL9) nodes only.
March
- Downtime for power infrastructure work within Physics.
February
- Downtime of the Grid test queue to enable upgrades to Rocky Linux 9.
- Downtime of vn0 to upgrade to Rocky Linux 9.
January
- The IPPP GPU2, GPU3 and GPU4 systems have had their memory upgraded to double the previous capacity.
- Brief downtime of CE-TEST, CE1, CE2, CE3 and CE4 to allow for software upgrades.
- WS5 was upgraded to Rocky 9 as a test deployment.
2023
December
- The new XGW storage system is currently undergoing testing after reports of missing data.
November
- 25th – 27th – Complete downtime for the Grid and some IPPP systems.
- SE01 (gsiftp, xrootd, webdavs) was retired; please use the new XGW system – see Grid Storage.
October
- 4 new grid nodes were brought online, giving an additional 700+ cores.
August
- Complete downtime for the Grid and various IPPP resources.
June
- GitLab was offline for 48 hours during a system migration.
- IPPP monitor deployment was 98% completed.
May
- IPPP OTP was enabled for the IPPP Website, Seafile and GitLab.
- IPPP Printing now utilises the CIS PaperCut print queues.
April
- 15 new grid nodes were brought online, replacing 2300 cores and adding a further 500 cores.
March
- WS3, WS4, WS6 and WS12 were replaced.
- CfAI WS Systems were brought online.
February
- CPU1, CPU2, CPU3, CPU4 and CPU5 were replaced with newer systems (approx. 500 cores up to 1200, with better power efficiency per core – 3.5W down to 1.9W).
- CPU6 and CPU7 were retired.
- WS1, WS2 and WS5 were replaced.
- WS7, WS13, WS14, WS15 and WS16 were brought online.
- Rollout started for the replacement of all IPPP monitors.
- Grid Storage was reduced to enable migration to a new system ready for DPM retirement.
January
- GridUI1 failures were investigated and resolved.
- GridUI1 was replaced with an upgraded system.
- The scratch SSD was transferred from GridUI2 to the upgraded GridUI1.
2022
- Upgraded the IPPP networking from 10Gb/s to a redundant 80Gb/s.
- Upgraded the Grid networking from 10Gb/s to a redundant 40Gb/s.