How to Run Open-Source LLMs Locally with the OpenAI Connector and Ollama 

In this blog post, you will learn how to invoke LLMs running locally on your machine from a Mendix app. For that, we'll leverage the OpenAI connector from the Mendix Marketplace to introduce smart GenAI features to your use case. Since many GenAI providers offer an API that is compatible with OpenAI's, the approach described below can also be used to integrate those providers into Mendix.

What is Ollama?

Ollama is a framework that lets you run open-source large language models (LLMs) such as DeepSeek-R1, Llama 3.3, Phi-4, Mistral, and Gemma 2 on your local machine. Running LLMs locally offers enhanced privacy, control, and performance by keeping data in your own environment and reducing latency. It also provides network independence and can improve reliability and compliance with regulatory requirements.

Prerequisites

Mendix Studio Pro 9.24.2 or higher.

1 – Download and install Ollama

Download and install Ollama.

Note for Mac users: If you are running Mendix Studio Pro on a Mac with Parallels, Mendix recommends installing Ollama on Windows so that you don't need to set up port forwarding.

2 – Download your first model

Check out the Ollama model library and download one of its models by opening a terminal and entering ollama pull model-id, replacing model-id with the ID of the model you would like to use from the model library. For this tutorial, we used DeepSeek-R1 and executed ollama pull deepseek-r1 in the terminal.

Depending on the model size, the download might take some time. While you are waiting, you can already continue with the next step and start setting up your Mendix app.

Once the download has finished, you can test the model directly in the console by running ollama run deepseek-r1 (again, replace deepseek-r1 with the model-id you chose) and then entering a prompt to start a conversation.
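
If you prefer to verify the model programmatically rather than in the interactive console, you can also send a single prompt to Ollama's local REST API. This is a minimal sketch, assuming the default Ollama port 11434 and the deepseek-r1 model; adjust the model name if you pulled something else.

```python
# Minimal sketch: send one prompt to the local Ollama server via its REST API.
# Assumes Ollama is running on the default port 11434 and deepseek-r1 has been pulled.
import json
import urllib.request

payload = {
    "model": "deepseek-r1",          # replace with the model-id you pulled
    "prompt": "Why is the sky blue?",
    "stream": False,                 # return the full answer in a single response
}

request = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

with urllib.request.urlopen(request) as response:
    body = json.loads(response.read())
    print(body["response"])          # the model's answer
```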

3 – Set up your Mendix app

Now that we have successfully set up and tested Ollama, we're ready to switch to Mendix Studio Pro and configure the OpenAI connector to talk to Ollama. Many AI providers and platforms offer a REST API that is compatible with OpenAI's API specification, which is why the OpenAI connector provides the ideal starting point for this implementation.
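
To illustrate what this compatibility looks like in practice, the sketch below points the official openai Python client at the local Ollama server instead of OpenAI's own endpoint; this is essentially what we will configure the OpenAI connector to do in step 5. It assumes the openai Python package is installed, Ollama is running on its default port, and deepseek-r1 has been pulled.

```python
# Minimal sketch of the OpenAI-compatible endpoint the connector will talk to.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
    api_key="ollama",                      # any non-empty string; Ollama does not validate it
)

completion = client.chat.completions.create(
    model="deepseek-r1",  # the Ollama model-id you pulled
    messages=[{"role": "user", "content": "Summarize what a low-code platform is in one sentence."}],
)
print(completion.choices[0].message.content)
```

The placeholder api_key is the same idea as the arbitrary token you will enter in the Mendix configuration later: the local Ollama server accepts any value.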

If you already have a Mendix project in Studio Pro version 9.24.2 or higher that you would like to use, download GenAI for Mendix and the OpenAI connector from the Mendix Marketplace and set up their dependencies. Alternatively, you can start with one of the GenAI starter apps, such as the AI Bot Starter App, which already contains all required modules and is a great template if you want to build your own ChatGPT-like custom chatbot.

4 – Configure the OpenAI connector

  1. Set up an encryption key by following the steps mentioned in MxDocs.
  2. Afterwards, add the module role OpenAIConnector.Administrator to your Administrator user role in the security settings of your app.
  3. Lastly, add the Configuration_Overview page (USE_ME > Configuration) to your navigation, or add the Snippet_Configurations snippet to a page that is already part of your navigation.

5 – Run the app and add your Ollama model configuration

Now run the app, log in as an Administrator, and open the OpenAI configuration page that you added to the navigation. Click the New button to create a new configuration.

Choose a display name and set the API type to OpenAI. Set the endpoint to http://localhost:11434/v1. Finally, enter 1 or any other string as the token to avoid a validation error when saving. The content of the token string is completely arbitrary, as the local Ollama server does not require authorization.

Ollama model configuration

After saving the configuration, you will see a new popup with all default OpenAI models. Those won't work with our Ollama configuration, so you can delete them. Afterwards, we'll add the local Ollama model as a deployed model to the Mendix app.

Choose a display name and set the model name to the Ollama model-id from the model library. The model overview on Ollama's website can help you determine the output modality of the model and any additional capabilities. For DeepSeek-R1, it should look like the screenshot below.

Ollama overview

Click Save and close the deployed model popup.

6 – Test the Ollama model in Mendix

To test your new model, hover over the three dots in the Ollama configuration row and select the Test option in the pop-up menu. Select the model you have just created from the deployed model drop-down list and click the Test button. If everything is set up correctly, you will see a success message.

If the test is not successful, check the logs in the console in Studio Pro to view more details and go through the following troubleshooting tips:

  1. Verify that the endpoint and model name were entered correctly and that neither contains blank spaces (the sketch after this list shows a quick way to check both).
  2. If the Ollama server cannot be reached, try restarting it by opening a new terminal and running ollama serve.
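
As a quick sanity check outside Mendix, you can list the models your local Ollama server exposes and compare them with the model name you entered. This is a minimal sketch, assuming the default port 11434 and that your Ollama version supports the OpenAI-compatible /v1/models listing.

```python
# Minimal sketch: verify the Ollama endpoint is reachable and list the model names it serves.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/v1/models") as response:
    models = json.loads(response.read())

for model in models.get("data", []):
    print(model["id"])  # IDs typically look like "deepseek-r1:latest"
```

If this request fails, the server itself is not reachable, which points to the second troubleshooting tip above.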

The model is now ready to be used in your Mendix app. If you have started with the AI Bot Starter App, take a look at the how-to documentation to complete the setup and start a chat.

Read more on smart apps

If you're new to GenAI, check out the GenAI showcase app, which demonstrates and explains more than ten different use cases for implementing GenAI in a Mendix app. To get started with the development of an AI-augmented app, Mendix offers, in addition to the AI Bot Starter App, various starter apps that can kickstart the development of a smart app, as they contain all the necessary models, configuration logic, and basic implementation. Available starter apps include the Support Assistant, which helps users query a knowledge base and create support tickets, and the RFP Assistant, which can be used to answer questionnaires with repetitive questions. See the Mendix documentation for an overview of all available GenAI components and apps.

All starter apps are compatible with Ollama models set up via the OpenAI connector as described in this blog post, provided that the model you're running supports the required capabilities, such as vision or function calling. Review the model overview on Ollama's website to filter for models with certain capabilities. Finally, take a look at the additional resources for building smart apps with Mendix.

Connect with us

If you're working on your own GenAI use case and need assistance or want to provide feedback, we'd love to hear from you. Contact your Customer Success Manager, send us an email, or message us in the #genai-connectors channel on the Mendix Community Slack. Sign up here!
