HPC Cloud: your flexible compute infrastructure
HPC Cloud gives you and your project team complete control over your computing infrastructure. The infrastructure ranges from a single workstation to a complete cluster and can be expanded to suit your needs. You can use your own operating system and analysis software. HPC Cloud is housed in SURF's own data centre.
Flexible environment
HPC Cloud enables you, as a user, to control the computing environment you are working in. This applies to both the operating system and the analysis software. This means that you can adapt the environment to suit the software you are using for your research, such as a relational database, a remote desktop or an application server. This is a major advantage over most other infrastructures, which often impose restrictions on operating systems and software.
From workstation to cluster
HPC Cloud can be accessed quickly: a simple request gives you direct access to computing capacity. As a researcher you can work with a single, heavy-duty workstation, which simplifies management. Alternatively, you can set up a complete compute cluster, enabling you to expand the available capacity quickly and easily to accommodate peaks in demand.
Fast processing and storage
HPC Cloud provides you with access to multi-core processors for multi-threaded applications and a fast network for distributed-memory applications. Fast storage facilities are available for data-intensive applications, while bulk storage facilities can be used for very large data sets.
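To make this concrete, here is a minimal Python sketch of the kind of multi-core workload these VMs suit well. It is purely illustrative and not part of the HPC Cloud documentation: the simulate function and the task count are hypothetical stand-ins for your own analysis code.

    # Illustrative only: a CPU-bound task fanned out over all cores of a VM.
    # simulate() is a hypothetical placeholder for a real analysis step.
    from multiprocessing import Pool, cpu_count

    def simulate(seed):
        # Placeholder CPU-bound work
        total = 0.0
        for i in range(1, 1_000_000):
            total += (seed % 7 + 1) / i
        return total

    if __name__ == "__main__":
        # On the largest HPC Cloud VMs, cpu_count() can report up to 80 cores
        with Pool(processes=cpu_count()) as pool:
            results = pool.map(simulate, range(32))
        print(f"Completed {len(results)} tasks on {cpu_count()} cores")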
Secure collaboration
You have full control over your own environment, so you can offer it to your team as a collaborative platform for sharing data. Each team has its own network, and you decide who has access. This means that no unauthorised persons can access your system or network, enabling you to share data safely with your fellow team members - and no one else.
In recent years researchers have discovered that the HPC Cloud service enables them to flexibly expand their own computing facilities. Examples of their projects are:
- Immuno-wars: attack of the clones
- Academic piracy - big data in legal science
- On top of the news with HPC Cloud
- Boosting ultra-sensitive microscopes
- Solving the puzzle of plant genomes (in Dutch)
- 'Cloudbursting' expands research infrastructure at VUmc
- Water research in the HPC Cloud (in Dutch)
- Meertens Institute: 2 billion tweets as research material (in Dutch)
- Geography (University of Amsterdam): Georeferencing - providing access to old maps through digitization (in Dutch)
Storage and back-up
Data are stored multiple times to minimise the risk of loss. To archive your data, you can use the SURF Data Archive service; HPC Cloud features a fast network connection to this archive. Note that SURF does not make automatic backups of data located in HPC Cloud: you are personally responsible for backing up your files.
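Since backups are your own responsibility, you could schedule something like the following minimal Python sketch on your VM, which mirrors a data directory to a remote host with rsync. The host name and paths are hypothetical placeholders for illustration, not actual SURF endpoints, and the sketch assumes rsync and SSH access are available.

    # Minimal backup sketch; assumes rsync and SSH access to a remote host.
    # "archive.example.org" and both paths are hypothetical placeholders.
    import subprocess
    import sys

    SOURCE = "/data/project/"  # directory on your VM to back up
    DESTINATION = "user@archive.example.org:/backup/project/"

    def backup():
        # -a preserves permissions and timestamps, -z compresses in transit
        result = subprocess.run(["rsync", "-az", SOURCE, DESTINATION])
        return result.returncode

    if __name__ == "__main__":
        sys.exit(backup())

Run from cron or a systemd timer, a script like this gives you the regular backups that the service itself does not provide.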
Support and consultancy
As a user of HPC Cloud you can always approach us for assistance. You can consult our standard documentation to set up your own infrastructure. We also offer a training course where you learn how to run your own virtual machine.
Our helpdesk can be reached by telephone, by e-mail or in person. If you have any questions or problems, please email helpdesk@surfsara.nl or call +31-20-8001400. The helpdesk is available during office hours (9:00-17:00).
Looking for specific advice about how you can best deploy HPC Cloud computing capacity for your research? Contact our consultancy service.
More information and contact
More (technical) information can be found on the HPC Cloud user information pages. Interested in learning more? Contact us via info@surfsara.nl.
If you use this service, you may also be interested in the following services:
Consultancy: independent advice
Our consultants support you from the first analysis of the problem to the final implementation. They provide independent advice on, among other things:
- accessing the Grid
- submitting jobs
- getting even better performance out of Cartesius or the Lisa Computing Cluster
- methods for approaching your data
- designing and optimising your own software
- the exact design of your data storage system
- organising your data infrastructure
- making optimal use of our computing and storage facilities
- integrating your virtual infrastructure into your work processes
- optimising applications
- running your software in parallel for faster processing
Depending on the size and complexity of your request, you will receive a customised proposal. We offer many options in the field of Big Data Services, including education and training as well as advice on architecture and the use of technology. For more information, please contact our consultancy service.
Long-term storage of research data
The Grid, the HPC Cloud and Data Ingest are all connected to the central archive of SURF. This archive offers you extensive options for storing your research data. You can also use the Persistent Identifier (PID) service for data stored on SURFsara storage services, such as the Data Archive. Do you want to store your data securely for long periods? Then make use of our Data Archive service.
Visualisation: immediate clarity of results
Do you work with calculations that produce large amounts of data? Then make use of our visualisation techniques and support. Visualisation helps you to interpret the results of your calculations more effectively.
Send data quickly with SURFlightpaths
Do you want a fast and reliable connection from your own network to our systems? With your own lightpath, you can very quickly send data to and from the Grid or the HPC Cloud. A lightpath is a direct connection that is shielded from the internet, making it extra secure, reliable and suitable, for example, for privacy-sensitive information. The biggest challenge with lightpaths is connecting them to the systems on both sides; we help you by bridging the final metres between the end point of a lightpath and your data sources.
This page provides the technical specifications of HPC Cloud (December 2018).
Virtual machines | up to 80 cores with 480 GB RAM
Hardware | 30 nodes with 32 to 40 Intel processor cores and 256 GB to 480 GB RAM per node
Shared storage | 3x redundancy to prevent data loss caused by hardware failure, on a 10 Gbit network per storage node
Please visit our user documentation pages for more (technical) information.
If you have any questions about the technical infrastructure, contact helpdesk@surfsara.nl.