Learn how to work with our systems.
When your calculations or analyses are too numerous or too large to run on your own system, clusters and supercomputers provide the compute power you need. In our cluster computing courses you'll learn how to work with the national compute cluster Lisa and the national supercomputer Cartesius. Hands-on exercises teach you how to use these systems effectively and how to execute your tasks in the minimum amount of time with minimal effort.
Anyone who would like to learn how to execute very large compute tasks. Familiarity with the basics of the Unix shell is required, but an introduction to the shell can be included in the course syllabus on demand.
- Introduction to Cluster Computing – 4 hours
- Introduction to Unix and Cluster Computing – 6 or 8 hours
- Cluster Computing for Life Sciences – 16 hours
Do you want to create and manage your own working environment and run high-performance applications on it? Then HPC Cloud is the right platform for you. In these training courses you will learn how to launch your own virtual machine and manage different environments through the web interface. You will be able to design your own infrastructure and run your applications in the cloud. The presentation of the platform is combined with hands-on exercises that help you get familiar with the environment.
Anyone who would like to start using a flexible and tailor-made environment to run high-performance applications. Some familiarity with Windows or Linux is required.
- Getting Acquainted with the HPC Cloud – 2 hours
- HPC Cloud for research – 8 hours
Correctly organising and storing large amounts of data is becoming a complicated task, because the information is heterogeneous and particular data often needs to be shared with specific groups of people. Our data management courses focus on giving you the knowledge to keep your data FAIR (findable, accessible, interoperable and reusable) using different services and tools, such as iRODS and compute workflows.
Anyone interested in learning how to store data in an efficient and scalable way. No specific previous experience is required, but some familiarity with the Unix shell and possibly Python is useful.
- Introduction to Data Management and EUDAT services – 2 hours
- Using Persistent Identifiers for Research Data Management – 2.5 hours
- Data Management – 4 hours
- iRODS Advanced Training – 4 hours
- iRODS System Admin Training – 4 hours
- Integrating Data Management into Compute Workflows – 8 hours
Developing efficient code takes effort to make it run well on different platforms, especially on clusters, supercomputers and heterogeneous systems with GPUs. Our parallel programming courses give you the background to understand code development for large systems and the basic skills to start writing your own parallel code or to parallelise your existing code. The focus is both on traditional parallel programming paradigms, such as the Message-Passing Interface (MPI) with C or Fortran, and on modern GPU architectures using Python.
Anyone who would like to learn to develop parallel code. It is necessary to be familiar with at least one programming language and the use of the Unix shell.
- Basic Concepts of Parallel Programming – 2 hours
- Parallel Programming with MPI – 8 or 16 hours
- Parallel Programming with OpenMP – 8 hours
- Parallel Programming with Python – 8 hours
- GPU Programming with Python – 8 hours
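To give a taste of the divide-and-combine pattern taught in these courses, the sketch below splits a summation across worker processes. It uses Python's standard multiprocessing module purely as a stdlib stand-in for the same scatter/compute/gather idea; the courses themselves use MPI with C, Fortran or Python.

```python
# Stdlib stand-in for the parallel divide-and-combine pattern:
# split a job into chunks, compute partial results in parallel
# worker processes, then combine them in the main process.
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum of squares over a half-open range [lo, hi)."""
    lo, hi = bounds
    return sum(i * i for i in range(lo, hi))

def parallel_sum_of_squares(n, workers=4):
    # Partition 0..n into roughly equal chunks, one per worker.
    step = (n + workers - 1) // workers
    chunks = [(i, min(i + step, n)) for i in range(0, n, step)]
    with Pool(workers) as pool:
        partials = pool.map(partial_sum, chunks)  # scatter + compute
    return sum(partials)                          # gather + combine

if __name__ == "__main__":
    # Same result as the serial sum, computed by 4 processes.
    print(parallel_sum_of_squares(1000))
```

In MPI the same pattern appears as explicit scatter, compute and reduce steps across processes that may live on different cluster nodes.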
Classification, clustering, feature description and many other complex tasks are nowadays supported by machine learning techniques. In our courses on these topics you will get an overview of the most popular machine learning applications through a range of hands-on exercises. You will also become familiar with the latest supporting software and get hints on how to extract the best performance from machine learning tasks on a supercomputer.
Anyone interested in diving into the world of machine learning, either without any previous knowledge (introductory course) or already with some basic experience (high-performance course). Some familiarity with Python and the use of Jupyter notebooks is helpful.
- Hands-on Introduction to Machine Learning – 8 hours
- High-Performance Machine Learning – 8 hours
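As a minimal illustration of the classification task mentioned above, here is a one-nearest-neighbour classifier in plain Python. The points and labels are invented toy data; the courses work with real machine learning frameworks and datasets instead.

```python
# Toy 1-nearest-neighbour classifier: label a query point with the
# label of its closest training point (Euclidean distance).
import math

def nearest_neighbour(train, query):
    """train: list of ((x, y), label) pairs; returns the label of the
    training point closest to query."""
    _, label = min(train, key=lambda item: math.dist(item[0], query))
    return label

# Invented toy data: two clusters, labelled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.2, 0.1), "a"),
         ((5.0, 5.0), "b"), ((5.1, 4.8), "b")]

print(nearest_neighbour(train, (0.1, 0.2)))  # -> a
print(nearest_neighbour(train, (4.9, 5.2)))  # -> b
```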
The amount of data produced by applications around the world is growing exponentially. Data-intensive computing has become difficult to manage with conventional high-performance computing and database management systems alone. Our big data training supports you in managing very large amounts of data. It gives you hands-on experience with platforms that handle the distribution of data and provide additional features, such as fault tolerance, to ensure the correct execution of data-intensive applications.
Anyone who is not afraid to start managing really large amounts of application data. No specific previous knowledge is required.
- Getting Started with Apache Spark – 8 hours
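To give a flavour of the dataflow style Spark uses, the sketch below runs a word count as a "map" phase followed by a "reduce by key" phase on a single machine. The input lines are made-up sample data; Spark distributes the same chained operations (e.g. `map`, `filter`, `reduceByKey` on an RDD) across a cluster, which is what the course covers.

```python
# Single-machine sketch of the map -> reduce-by-key dataflow that
# Spark distributes across a cluster. The lines are invented data.
from collections import Counter
from functools import reduce

lines = [
    "big data needs big tools",
    "spark handles big data",
]

# "map" phase: split each line into (word, 1) pairs.
pairs = [(word, 1) for line in lines for word in line.split()]

# "reduce by key" phase: sum the counts per word.
def merge(counts, pair):
    word, n = pair
    counts[word] += n
    return counts

word_counts = reduce(merge, pairs, Counter())
print(word_counts["big"])  # -> 3
```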
Turning your 3D data into a compelling image or video for a publication or project proposal, or to enable easier and more accurate analysis, can really make a difference. Different types of data (such as geographical data or networks) require different visualisation methods, techniques and tools. Our visualisation courses give you the insight you need to display your research data in different formats, either on your own PC or on a remote system (e.g. a supercomputer that can provide compute power for very large data sets).
Anyone who wants to learn how to turn raw data into beautiful representations. Previous experience with the use of 3D data is helpful. Bringing your own data for the hands-on exercises is always welcome!
- Introduction to Local and Remote Visualisation – 4 hours
- Visualisation with Blender – 8 hours
Reproducibility and portability are important elements of research, but constantly changing computing environments make them difficult to achieve. Our container training teaches you how to package your software environment in a portable way, so that you can run the same application in many different, heterogeneous environments. A hands-on approach guides you through the creation and execution of your own container.
Anyone interested in reproducible and portable execution on any system. Some familiarity with the Unix shell and batch systems (cluster/supercomputer) is required.
- Using Singularity Application Containerization for Reproducible Scientific Computing – 4 hours
Regular training courses
SURF organises Research Boot Camps once or twice a year, a hands-on ICT training day for researchers and research supporters. Take a look at our agenda for more information.
Looking for an introductory course to support researchers in storing, managing, archiving and sharing their research data? Enrol in the Essentials 4 Data Support course of Research Data Netherlands (RDNL). RDNL is an alliance between 4TU.Centre for Research Data, Data Archiving and Networked Services (DANS) and SURFsara, joining forces in the area of long-term data archiving.
Every year we collaborate with the universities in Amsterdam by providing workshops on stepping up to supercomputing within their regular training programmes. Researchers from outside UvA or VU are also welcome to participate. You will find more specific information on these regular courses on the web pages of:
We provide various training activities within several European collaboration programmes, such as PRACE, CompBioMed and ELIXIR. You will find all the information about these training activities in our agenda, as well as more general information on external websites: