Simulation and modelling
The models used by researchers are growing ever larger and more complex, and consequently require more computing capacity. Cartesius can provide this capacity thanks to its large number of processors and their combined speed. The system is ideal for large-scale experiments, such as simulations and models that demand not only a great deal of computing power and memory but also intensive communication between the various processors. Another key feature of Cartesius is its fast internal network: this applies not only to the bandwidth but also to the latency, the time it takes to send a message from one node to another.
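The trade-off between latency and bandwidth can be made concrete with the standard latency-bandwidth (often called alpha-beta) cost model for message passing. The sketch below is illustrative only; the latency and bandwidth figures are assumptions for the example, not Cartesius specifications:

```python
def message_time(size_bytes, latency_s, bandwidth_bytes_per_s):
    """Alpha-beta model: transfer time = latency + size / bandwidth."""
    return latency_s + size_bytes / bandwidth_bytes_per_s

# Illustrative (assumed) numbers: 2 microseconds latency, 5 GB/s bandwidth.
latency = 2e-6
bandwidth = 5e9

# For small messages the latency term dominates;
# for large messages the bandwidth term dominates.
small = message_time(1024, latency, bandwidth)           # roughly 2.2 us
large = message_time(100 * 1024**2, latency, bandwidth)  # roughly 21 ms

print(f"1 KiB message:   {small * 1e6:.1f} us")
print(f"100 MiB message: {large * 1e3:.1f} ms")
```

This is why a fast interconnect needs both properties: low latency keeps many small synchronisation messages cheap, while high bandwidth keeps large data transfers cheap.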
A vast number of processors
The system runs on Linux and enables users to compute using a vast number of processors. Besides the Intel processors, Cartesius also uses GPGPUs (General Purpose Graphics Processing Units). These accelerators combine the compute power of the GPUs with that of the CPUs. Cartesius also has fat nodes, which have 32 cores and more memory (256 GB). Researchers have access to a comprehensive set of tools, compilers and libraries. A number of libraries and tools are available on Cartesius to facilitate experiments and research in the area of machine learning, including neural networks. See userinfo.surfsara.nl for more information.
Data storage as required
As a Dutch National Supercomputer user, you will have over 200 gigabytes of disk space available for your own files (the home file system). We perform daily backups of these files. You will also have the use of 8 terabytes of temporary storage capacity. These files are not backed up and are deleted after two weeks. Supplementary arrangements can be made if you require additional storage capacity.
Support & consultancy
Our support service is available to all our users. We can help you run or optimise your own software or algorithm, for example. Our Dutch National Supercomputer team can install special software packages and help you parallelise your own software. Online manuals offer suggestions on how to achieve even better performance, along with other information. Or follow one of our training courses.
The Dutch National Supercomputer is used by researchers in such fields as astrophysics, theoretical chemistry, hydrodynamics, climate research, water management and product and process optimisation. Below are some examples of research conducted using the Dutch National Supercomputer:
- SURFstory on climate research: "We're all in the same boat"
- Through the outback with a lightning fast solar car
- Cool tools: understanding communication between cells
- The Ocean Cleanup: computer models in the fight against plastic soup
- Video Virtual Humans: High performance computing for personalised health
- TU Delft: How to purify industrial wastewater by means of bacteria
- Dacolt: Modeling combustion processes
- Computational astrophysics - Leiden University: Cartesius and the white dwarfs
- Leiden University: Methane dissociation as a source of hydrogen
- Binkz: Calculating snow transport and accumulation for more efficient roof construction
You can contact our helpdesk by telephone or e-mail, or in person if you prefer. Please send us your questions or problems at email@example.com or by telephone on +31-208001400. The helpdesk can be contacted during office hours (09:00-17:00).
If you would like specific advice, e.g. on how to optimise your code for better performance, please feel free to contact one of our advisers. Our consultancy service is available for advice on larger projects.
More information and contact
You can find more (technical) information regarding the use of the Dutch National Supercomputer on the pages with user information for Cartesius.
If you have further questions, please contact us at firstname.lastname@example.org.
If you use this service, you may also be interested in the following services:
Consultancy: independent advice
Our consultants support you from the first analysis of the problem to the final implementation. They provide independent advice on, among other things:
- accessing the Grid
- submitting jobs
- getting even better performance from Cartesius or the Lisa Computing Cluster
- methods for approaching your data
- designing and optimising your own software
- the exact design of your data storage system
- organising your data infrastructure
- making optimal use of our computing and storage facilities
- integrating your virtual infrastructure into your work processes
- optimising applications
- running your software in parallel for faster processing
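As a hypothetical illustration of the last point, the sketch below distributes an embarrassingly parallel workload over several worker processes using Python's standard multiprocessing module. The workload function is invented for the example; our advisers would work with your actual code:

```python
from multiprocessing import Pool

def simulate_cell(seed):
    """Stand-in for one independent unit of work (e.g. one simulation run)."""
    # A cheap deterministic computation as a placeholder workload.
    total = 0
    for i in range(1, 1000):
        total += (seed * i) % 97
    return total

if __name__ == "__main__":
    seeds = list(range(32))
    # Serial reference run.
    serial = [simulate_cell(s) for s in seeds]
    # Parallel run: the pool distributes the seeds over worker processes.
    with Pool(processes=4) as pool:
        parallel = pool.map(simulate_cell, seeds)
    # Identical results, computed concurrently.
    assert parallel == serial
    print(f"computed {len(parallel)} independent tasks in parallel")
```

Because each task is independent, the work scales across cores with no communication between workers; on a cluster the same pattern is typically expressed with a batch scheduler or MPI rather than a local process pool.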
Depending on the size and complexity of your request, you will receive a customised proposal. We offer many options in the field of Big Data Services, including education and training as well as advice about architecture and the use of technology. For more information, please contact our consultancy service.
Long-term storage of research data
The Grid, the HPC Cloud and Data Ingest are all connected to the central archive of SURFsara. This archive offers extensive options for storing your research data. In addition, you can use the PID (Persistent Identifier) service for data stored on SURFsara storage services, such as the Data Archive. If you want to store your data securely for long periods, make use of our Data Archive service.
Visualisation: immediate clarity of results
If your calculations produce large amounts of data, take advantage of our visualisation techniques and support. Visualisation helps you to interpret the results of your calculations more effectively.
Application support: improve your performance
If you use Cartesius or Lisa, application support can be very useful for optimising and parallelising your software programs. By improving the performance, you can carry out more science with the same number of core hours. You can request application support using the Request form for application support.
| Component | Specification |
| --- | --- |
| System type | Bullx system extended with one Bull sequana cell. Built by Atos/Bull. |
| Full system | 47,776 cores + 132 GPUs: 1.843 Pflop/s (peak performance) |
| Thin nodes (Haswell) | 25,920 cores: 1.078 Pflop/s |
| Thin nodes (Ivy Bridge) | 12,960 cores: 249 Tflop/s |
| Thin nodes (Broadwell) | 5,664 cores: 236 Tflop/s |
| GPGPU nodes (Ivy Bridge + K40m) | 1,056 cores + 132 GPUs: 210 Tflop/s |
| Xeon Phi nodes (Knights Landing) | 1,152 cores: 48 Tflop/s |
| Fat nodes (Sandy Bridge) | 1,024 cores: 22 Tflop/s |
| Memory | 130 TB (CPU + GPGPU + HBM) |
| Disk storage | 180 TB home file systems, 7.7 PB scratch and project |