Training courses for research

Want to get started with our systems but lack the necessary knowledge? We regularly organise hands-on systems training courses online, at our offices in Utrecht and Amsterdam, or at your education or research institution. You can also include these training courses in the educational programme of your institution.

Systems training

Learn how to work with our systems.

Supercomputing

Why:
When you need to run many computations, or analyses that are too large for your own system, clusters and supercomputers provide the compute power you need. In our cluster computing courses you will learn how to work with the national compute cluster Lisa and the national supercomputer Snellius. Hands-on exercises teach you how to use these systems effectively and how to run your tasks with a minimum of time and effort.

Who:
Anyone who would like to learn how to run very large compute tasks. Familiarity with the basics of the Unix shell is required, but an introduction to the shell can be included in the course syllabus on request.

What:

  • Introduction to Cluster Computing – 4 hours
  • Introduction to Unix and Cluster Computing – 6 or 8 hours
  • Cluster Computing for Life Sciences – 16 hours

HPC Cloud

Why:
Do you want to create and manage your own working environment and run high-performance applications on it? Then HPC Cloud is the right platform for you. In these training courses you will learn how to launch your own virtual machine and manage different environments through the web interface, so that you can design your own infrastructure and run your applications in the cloud. Presentations of the platform are combined with hands-on exercises that help you become familiar with the environment.

Who:
Anyone who would like to start using a flexible and tailor-made environment to run high-performance applications. Some familiarity with Windows or Linux is required.

What:

  • Getting Acquainted with the HPC Cloud – 2 hours
  • HPC Cloud for research – 8 hours

Data management

Why:
Organising and storing large amounts of data correctly has become a complicated task, due to the heterogeneity of the information and the need to share particular data with specific groups of people. Our data management courses focus on giving you the knowledge you need to keep your data FAIR (findable, accessible, interoperable and reusable) using services and tools such as iRODS and compute workflows.

Who:
Anyone interested in learning how to store data in an efficient and scalable way. No specific previous experience is required, but some familiarity with the Unix shell and possibly Python is useful.

What:

  • Introduction to Data Management and EUDAT services – 2 hours
  • Using Persistent Identifiers for Research Data Management – 2.5 hours
  • Data Management – 4 hours
  • iRODS Advanced Training – 4 hours
  • iRODS System Admin Training – 4 hours
  • Integrating Data Management into Compute Workflows – 8 hours
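
As a small taste of what the iRODS-based courses work towards, the sketch below uses the python-irodsclient package to upload a file to iRODS and attach metadata to it. The host, credentials, zone and paths are placeholders for illustration only; they are not real course settings.

    from irods.session import iRODSSession

    # Connection details below are placeholders for illustration only.
    with iRODSSession(host="irods.example.org", port=1247, user="alice",
                      password="secret", zone="exampleZone") as session:
        # Upload a local file into an iRODS collection
        session.data_objects.put("results.csv", "/exampleZone/home/alice/results.csv")

        # Attach descriptive metadata so the data stays findable (the F in FAIR)
        obj = session.data_objects.get("/exampleZone/home/alice/results.csv")
        obj.metadata.add("experiment", "run-42")
        obj.metadata.add("instrument", "confocal-3")
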

Technical skills

Parallel programming

Why:
Developing efficient code takes effort, especially when it has to run well on different platforms such as clusters, supercomputers and heterogeneous systems with GPUs. Our parallel programming courses give you the background to understand the basics of code development for large systems, and the skills to start writing your own parallel code or to parallelise your existing code. The focus is on both traditional parallel programming paradigms, such as the Message Passing Interface (MPI) with C or Fortran, and modern GPU architectures using Python.

Who:
Anyone who would like to learn to develop parallel code. It is necessary to be familiar with at least one programming language and the use of the Unix shell.

What:

  • Basic Concepts of Parallel Programming – 2 hours
  • Parallel Programming with MPI – 8 or 16 hours
  • Parallel Programming with OpenMP – 8 hours
  • Parallel Programming with Python – 8 hours
  • GPU Programming with Python – 8 hours
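
To give an idea of the level of the Python-based material, here is a minimal sketch of MPI-style parallelism with mpi4py: every process sums part of a range and rank 0 collects the total. The script name and process count in the run command are only examples.

    # Run with, for example:  mpirun -np 4 python partial_sums.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    size = comm.Get_size()

    # Round-robin work distribution: each rank sums every size-th number
    partial = sum(range(rank, 1_000_000, size))

    # Combine the partial sums on rank 0
    total = comm.reduce(partial, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"Sum computed by {size} processes: {total}")
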
Machine learning

Why:
Classification, clustering, feature description and many other complex tasks are nowadays supported by machine learning techniques. In our courses on these topics you will get an overview of the most popular machine learning applications through hands-on exercises. You will also become familiar with up-to-date software and get tips on how to extract the best performance from machine learning tasks on a supercomputer.

Who:
Anyone interested in diving into the world of machine learning, either without any previous knowledge (introductory course) or already with some basic experience (high-performance course). Some familiarity with Python and the use of Jupyter notebooks is helpful.

What:

  • Hands-on Introduction to Machine Learning – 8 hours
  • High-Performance Machine Learning – 8 hours
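
As an impression of a typical hands-on exercise, the sketch below trains and evaluates a simple classifier with scikit-learn on a built-in toy dataset; it is an illustration, not part of the official course material.

    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    # Load a small built-in dataset and split it into training and test sets
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

    # Train a classifier and report its accuracy on unseen data
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_train, y_train)
    print("Test accuracy:", accuracy_score(y_test, model.predict(X_test)))
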
Big data

Why:
The amount of data produced by applications around the world is growing exponentially, and data-intensive computing has become difficult to manage with conventional high-performance computing and database management systems alone. Our big data training supports you in managing very large amounts of data, with hands-on experience of platforms that distribute data across many machines and provide additional features, such as fault tolerance, to ensure the correct execution of data-intensive applications.

Who:
Anyone who is not afraid to start managing very large amounts of application data. No specific previous knowledge is required.

What:

  • Getting Started with Apache Spark – 8 hours
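
To give an impression of what working with Apache Spark looks like, the sketch below runs a classic word count with PySpark; the input path is a placeholder.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("word-count-sketch").getOrCreate()

    # Read a text file, split lines into words and count each word across the cluster
    lines = spark.read.text("data/input.txt").rdd.map(lambda row: row[0])
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    for word, count in counts.take(10):
        print(word, count)

    spark.stop()
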
Visualisation

Why:
Turning your 3D data into a striking image or video for a publication or project proposal, or to make analysis easier and more accurate, can really make a difference. Different types of data (such as geographical data or networks) require different visualisation methods, techniques and tools. Our visualisation courses give you the insight you need to display your research data in different formats, either on your own PC or on a remote system (e.g. a supercomputer that can provide compute power for very large data sets).

Who:
Anyone who wants to learn how to turn raw data into beautiful representations. Previous experience with the use of 3D data is helpful. Bringing your own data for the hands-on exercises is always welcome!

What:

  • Introduction to Local and Remote Visualisation – 4 hours
  • Visualisation with Blender – 8 hours
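
As a small preview of scripted visualisation, the sketch below uses Blender's Python API (bpy) to add an object to the default scene and render it without opening the user interface, the kind of workflow used when rendering on a remote system. The output path is a placeholder.

    # Run with:  blender --background --python render_sketch.py
    import bpy

    # Add a sphere next to the default cube, then render the scene to an image file
    bpy.ops.mesh.primitive_uv_sphere_add(radius=1.0, location=(2.5, 0.0, 0.0))
    bpy.context.scene.render.filepath = "/tmp/render_sketch.png"  # placeholder output path
    bpy.ops.render.render(write_still=True)
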
Software containers

Why:
Reproducibility and portability are very important elements of research, but rapidly changing computing environments make them difficult to achieve. Our training on containers teaches you how to package your software environment in a portable way, so that you can run the same application on many different, heterogeneous systems. We follow a hands-on approach and guide you through creating and running your own container.

Who:
Anyone interested in obtaining reproducible and portable executions in any system. Some familiarity with the use of the Unix shell and batch systems (cluster/supercomputer) is required.

What:

  • Using Singularity Application Containerization for Reproducible Scientific Computing – 4 hours

Regular training courses

SURF Research Bootcamp: learn how IT can boost your research

SURF organises a Research Bootcamp once or twice a year: a hands-on IT training day for researchers and research supporters. Take a look at our agenda for more information.

Research data management support course

Looking for an introductory course to support researchers in storing, managing, archiving and sharing their research data? Enrol in the Essentials 4 Data Support course of Research Data Netherlands (RDNL). RDNL is an alliance of 4TU.Centre for Research Data, Data Archiving and Networked Services (DANS) and SURFsara, joining forces in the area of long-term data archiving.

UvA & VU HPC Courses

Every year we collaborate with the universities in Amsterdam by providing workshops on stepping up to supercomputing within their regular training programmes. Researchers from outside the UvA or VU can also participate. You will find more specific information on these regular courses on the web pages of the UvA and the VU.

Basic programming skills and best practices

Together with the Netherlands eScience Center we offer workshops on best practices and basic programming skills, in combination with the use of SURF services. Keep an eye on our agenda for more information!

European-level training activities

We provide various training activities in the framework of several European collaboration programmes, such as PRACE, CompBioMed and ELIXIR. You will find all the information about these training activities in our agenda, as well as more general information on the websites of these programmes.

Our training courses are generally given in English.

Dates

Would you like to stay informed about the dates of our training courses?

Join mailing list

Go to event calendar