Lustre
Lustre is a parallel distributed file system used in large cluster computing environments. The name Lustre is a portmanteau of Linux and cluster.
Lustre file system
The Lustre parallel file system is characterized by very high throughput and very good scalability. The SCC has used it in production on various HPC systems since the beginning of 2005. The SCC-operated HPC systems currently using Lustre are the research high-performance computers ForHLR I and ForHLR II and the state computer bwUniCluster. These HPC systems currently host a total of 10 Lustre file systems with a combined storage capacity of 9573 TiB, 61 servers, and 2600 clients.
Further information can be found in the user guides for the various HPC systems.
Lustre article
A very good German-language introduction to Lustre was created at KIT as part of a student research project; it can be found here.
More general information about Lustre is available in this article, which was published in RZ-News in September 2005.
Lustre lectures
Experiences with administering and using Lustre are documented in the following presentations:
- Introductory event at the data center (2005): Filesystems on SSCK's HP XC6000 (14 pages pdf, 210 KB)
- HP-CAST 4 in Krakow (10.5.2005): Experiences & Performance of SFS/Lustre Cluster File System in Production (17 pages pdf, 150 KB)
- ISC 2005 in Heidelberg (24.6.2005): Experiences with HP SFS / Lustre in HPC Production (16 pages pdf, 190 KB)
- HP-CAST 5 in Seattle (11.11.2005): Experiences with 10 Months HP SFS / Lustre in HPC Production (16 pages pdf, 140 KB)
- HP-CCN in Seattle (12.11.2005): Performance Monitoring in an HP SFS Environment (12 pages pdf, 220 KB)
- SGPFS 5 in Stuttgart (4.4.2006): Experiences with HP SFS / Lustre at SSCK (20 pages pdf, 210 KB)
- ISC2006 in Dresden (27.6.2006): Parallel File Systems and Impact of Blade Systems (15 pages pdf, 290 KB)
- HP-CCN in Tampa (11.11.2006): Latest Production Experiences with HP SFS (15 pages pdf, 176 KB)
- RZ Chalk and Talk in Karlsruhe (30.11.2006): Parallel File Systems Compared (21 pages pdf, 230 KB)
- xc2 Introduction Workshop in Karlsruhe (12.1.2007): The Parallel File System HP SFS/Lustre on xc2 (19 pages pdf, 210 KB)
- HP-CAST 8 in Karlsruhe (8.5.2007): SFS Troubleshooting and Monitoring Tools (7 pages pdf, 60 KB)
- HP-CCN in Karlsruhe (9.5.2007): Latest Experiences with HP SFS (12 pages pdf, 200 KB)
- ISC2007 in Dresden (27.6.2007): Experiences with SFS/Lustre in Multi-user Production Environments (12 pages pdf, 180 KB)
- HP-CAST 12 in Madrid (12.5.2009): Experiences with different HP SFS/Lustre systems at KIT/SCC(19 pages pdf, 631 KB)
- SIMLAB@KIT in Karlsruhe (29.11.2010): Using file systems at HC3(13 pages pdf, 1040 KB)
- bwGRiD Face-to-face meeting in Karlsruhe (1.3.2011): Assistance in Lustre administration (15 pages pdf, 1200 KB)
- European Lustre Workshop 2011 in Paris (26.9.2011): Lessons learned from Lustre file system operation (17 pages pdf, 1500 KB)
- SCC seminar in Karlsruhe (5.10.2011): Lessons learned from parallel file system operation (24 pages pdf, 2100 KB)
- OpenSFS booth at SC'13 (18.11.2013): Lustre at KIT (3 pages pdf, 98 KB)
- Internal SCC Storage Workshop at KIT (16.4.2014): The parallel file system Lustre (10 pages pdf, 234 KB)
- Lustre Administrators and Developers Workshop 2014 at Reims (22.9.2014): Challenges in making Lustre systems reliable (15 pages pdf, 399 KB)
- Lustre Administrators and Developers Workshop 2015 in Paris (22.9.2015): Lustre tools for ldiskfs investigation and lightweight I/O statistics (14 pages pdf, 578 KB)
- Lustre Administrators and Developers Workshop 2016 in Paris (21.9.2016): Performance of and experiences with Lustre over a long distance InfiniBand connection (24 pages pdf, 809 KB)
- HPC I/O in the Data Center Workshop (HPC-IODC) at ISC 2017 in Frankfurt (22.6.2017): Extraordinary HPC file system solutions at KIT (20 pages pdf, 667 KB)