Exterior view of the Wijnhaven location of Rotterdam University of Applied Sciences

Photo: Rotterdam University of Applied Sciences

Case study

AI-hub

From pilots to practice: Rotterdam University of Applied Sciences is using AI-hub to build safe and reliable AI agents

How can you responsibly train AI agents to search scientific sources without providing incorrect answers? Rotterdam University of Applied Sciences is conducting research into this. Researchers are using the AI-hub to ensure that language models are deployed reliably and responsibly. This enhances safety, privacy, compliance and transparency in education and research.

Key facts

Who: Rob van der Willigen
Role: Senior Lecturer at the Knowledge Centre for Healthcare Innovation (KCZ) and Tech Lead of the DataLabs EAS, Healthcare and AI SusTech
Organisation: Rotterdam University of Applied Sciences
Service under development: AI-hub
Challenge: Rotterdam University of Applied Sciences wants AI agents to search scientific sources without hallucinations, delivering demonstrably reliable results within secure, privacy- and compliance-proof frameworks.
Solution: Rotterdam University of Applied Sciences uses the AI-hub to select the most suitable language model and, through guardrails, secure tooling and governance, ensure the output is reliable and complies with privacy rules and other regulations.

What do you do when you genuinely don’t know the answer to a question? Do you admit it, or do you bluff your way out of the situation? Large Language Models (LLMs) have a strong tendency towards the latter. Rob van der Willigen, senior lecturer at the Knowledge Centre for Healthcare Innovation (KCZ) and Tech Lead of the DataLabs EAS, Healthcare and AI SusTech at Rotterdam University of Applied Sciences, says: “I recently read an article in Nature showing that when you ask AI models to search for scientific references, error rates of up to 80 per cent occur.”

Van der Willigen focuses on verifying the accuracy of AI agents that independently search scientific information systems such as Google Scholar, PubMed and other bibliographic databases.

Building secure LLM applications with an AI-hub  

Choosing the right AI model and implementing robust ‘guardrails’ (restrictions on the algorithm’s actions) are essential. For many educational institutions, the reliability of AI models is a pressing concern. SURF and Npuls are stepping in to help with the development of the AI-hub and EduGenAI.  
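The article does not specify how the AI-hub's guardrails are implemented, but the idea of restricting an algorithm's actions can be illustrated with a minimal sketch. Everything below is hypothetical: the allowlist, the function name and the DOI pattern are illustrative, not the AI-hub's actual mechanism.

```python
# Minimal guardrail sketch: block answers that cite references which
# cannot be verified. All names and the allowlist are illustrative.
import re

# Hypothetical set of references already verified by a human or a
# trusted bibliographic lookup (the DOI below is deliberately fake).
VERIFIED_DOIS = {"10.1234/example.doi"}

def guardrail(answer: str) -> str:
    """Pass the answer through only if every cited DOI is verified."""
    cited = set(re.findall(r"10\.\d{4,9}/[^\s,;]+", answer))
    unverified = cited - VERIFIED_DOIS
    if unverified:
        return "Blocked: unverified citations " + ", ".join(sorted(unverified))
    return answer

ok = guardrail("See 10.1234/example.doi for details.")
blocked = guardrail("See 10.9999/made.up.ref for details.")
print(ok)
print(blocked)
```

A real guardrail would query an actual bibliographic database instead of a static set, but the principle is the same: the model's output is checked before it reaches the user.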

The AI-hub acts as a layer between an AI application and the underlying language model (LLM). This layer gives you the freedom to select the best model for your specific needs. Furthermore, the AI-hub guarantees that data processing is carried out reliably, securely, and in compliance with regulations. “The AI-hub offers the opportunity to deploy language models securely from a protected environment. You can link tools to it, enabling you to build a truly reliable AI agent,” explains Corno Vromans, Public Values Advisor at SURF, who is closely involved in the development of the AI-hub.  

Van der Willigen’s project is one of 85 pilot projects through which SURF, together with Npuls, is working hard on a version expected to be available to all members by the end of the year. The pilot phase is broad in scope and intended to gain practical experience. At the same time, technical expertise within institutions is sometimes required to integrate the AI-hub properly. “We provide our users with a so-called API key, a key to access the AI-hub. And of course we provide support, for example through a comprehensive wiki.”
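Access via an API key typically looks something like the sketch below. The endpoint URL, path and payload shape are assumptions modelled on the widely used OpenAI-compatible convention, not the AI-hub's documented interface; the key and base URL are placeholders, and the actual details are on the AI-hub wiki.

```python
# Sketch of preparing a request to a hosted model with an API key.
# Endpoint, payload shape and header are assumptions (OpenAI-style
# convention); the URL and key below are placeholders, not real values.
import json

API_KEY = "YOUR-AI-HUB-API-KEY"          # issued per institution
BASE_URL = "https://ai-hub.example.nl/v1" # placeholder base URL

def build_chat_request(model: str, prompt: str) -> dict:
    """Assemble (but do not send) an HTTP request for a chat completion."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = build_chat_request("some-hosted-model", "Find sources on AI agents.")
print(req["url"])
```

Because the hub sits between the application and the model, swapping models is in this pattern just a matter of changing the `model` string, while authentication and data handling stay in the protected environment.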

Rotterdam University of Applied Sciences can accelerate safely  

Van der Willigen is very enthusiastic. “Without SURF, we wouldn’t have been able to use such a wide range of models.” For example, he can now deploy a number of Chinese models that would have had to be hosted locally without the AI-hub. “We simply don’t have the infrastructure for that. The AI-hub has accelerated our development enormously.”

Van der Willigen was already involved in the preliminary phase of the AI-hub and participated in developing the legal frameworks needed to make the pilots possible. Among other things, he works on smaller, more effective and domain-specific models for the healthcare sector.

He illustrates just how essential accuracy is with an example from the drama series The Pitt. In one scene, Dr Al-Hashimi emphasises that generative AI in an electronic patient record is about 98 per cent accurate, but that, unfortunately, the medication has been entered incorrectly. “That is the problem with AI at the moment. 99 per cent seems to be fine, but it is precisely that 1 per cent of crucial information that you cannot rely on. You have such specific problems within every domain.”

SURF fosters an innovative environment

The models developed by Rotterdam University of Applied Sciences’ Data Lab are made available to third parties under open-source licences via GitHub repositories and wikis. “So anyone can replicate what we have already worked out.”

Van der Willigen sees a growing role for SURF in initiatives of this kind. He would like to see SURF literally build a vault housing servers accessible only to Dutch research groups, running models built solely on Dutch, or more broadly European, soil.

“The problem with AI at the moment: 99 per cent seems fine, but it’s precisely that 1 per cent of crucial information that you can’t rely on”

A technical counterforce  

“Europe’s Achilles’ heel is not so much our innovative strength compared to the rest of the world, because in many areas we are genuinely ahead. The problem is that the revenue model is working against us. Start-ups are bought out very early on. An initiative like this acts as a counterforce to that. With SURF and now within the Npuls programme, you can work on innovations over the long term without anyone breathing down your neck, asking whether money is being made from them yet.” The AI-hub and EduGenAI are fine examples of this: the AI-hub provides the technical foundation for applications such as research into AI agents, while EduGenAI, also developed locally, offers an accessible, education-focused user interface on top of it.

To make full use of the AI-hub, interested educational institutions will need to be patient for a little while longer. However, preparations are already possible, for example by attending a special meeting on 23 June at the SURF office in Utrecht, where participants will give presentations on their pilot projects. Register for the meeting via ai@surf.nl.

Text: Thijs Doorenbosch

More information about the AI-hub can be found on the wiki.
