How to Build Smarter Apps with Function Calling & Generative AI
Creating dynamic, interactive, and smart applications is a key differentiator for software companies, especially in a rapidly evolving technological landscape. Generative AI (GenAI) offers many opportunities to help organizations achieve this. The challenge comes in extending GenAI capabilities beyond static text generation.
The default text generation capabilities of Large Language Models (LLMs) often fall short. This is because they lack real-time, contextual knowledge and cannot access private data post-training.
Function calling can help to solve complex problems, from data retrieval to task automation, by making microflows accessible to an LLM. In effect, you can create a virtual agent in a low-code way.
The ultimate AI assistant
Imagine an AI support assistant that can do all of the following within a single conversation:
- Answer common questions based on information found in a private knowledge base
- Retrieve real-time user data from a database
- Search a private knowledge base for solutions
- Create a support ticket on behalf of an employee
This capability would be extremely valuable for software developers tasked with implementation. It would also help product owners who seek to enhance user satisfaction and operational efficiency.
Most importantly, end-users would benefit from streamlined, efficient support processes. This is where function calling—also referred to as tool use—comes into play. It allows AI models to perform a variety of tasks on behalf of the current user.
The video below shows the user interface of our newly released Support Assistant Starter App. The animation is based on the previously described IT support use case. It also serves as an example throughout this post. The Support Assistant Starter App will give you a head start with building a virtual agent. (For more tips on how to get started, see the resources in the “How to get started” section at the end of this blog.)
Let’s take a look at how function calling can help you solve complex problems from data retrieval to task automation. We’ll also cover how to integrate platform-supported GenAI connectors into your Mendix applications. We’ll provide you with practical examples and actionable insights to leverage the full potential of GenAI to build smarter apps.
What is function calling?
Function calling is an optional feature of text generation (also known as chat completions) that allows an LLM to connect to external tools and systems.
Function calling is a standard pattern supported by many LLMs:
- OpenAI ChatGPT
- Anthropic Claude
- Meta Llama
- Cohere Command.
Allowing an LLM to connect to external tools and systems can be valuable for many different use cases, such as:
Data retrieval
- Accessing real-time data (e.g. live production metrics) or domain-specific information (e.g. internal documentation) to provide relevant responses
- Retrieving data about the current user so they can ask questions about their account or recent activities
- Retrieving real-time data, such as stock prices or weather data, from public external APIs to answer questions like: “What will the weather be like tomorrow in Rotterdam?” (a sketch of such a function definition follows these lists)
Triggering actions
- Creating objects like orders based on a chat conversation
- Kicking off workflows in internal or external systems by using the LLM to interpret complex requests
- Performing calculations based on unstructured data
- Interacting with objects on a page by making changes or asking for something to be displayed
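To make the idea concrete, here is a minimal sketch of how a function like the weather lookup above is typically described to an LLM. It uses an OpenAI-style JSON schema purely for illustration; the names `get_weather` and `city` are hypothetical, and in Mendix you would register a function microflow with a comparable name and description instead.

```python
# Illustrative only: an OpenAI-style function (tool) definition.
# The names "get_weather" and "city" are hypothetical; in Mendix the
# equivalent is a function microflow registered with a name and description.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": (
            "Retrieve tomorrow's weather forecast for a given city. "
            "Use this when the user asks about the weather."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "city": {
                    "type": "string",
                    "description": "Name of the city, e.g. Rotterdam",
                }
            },
            "required": ["city"],
        },
    },
}
```

The description is what the model reads when deciding whether to call the function, so it pays to be explicit about when the function should (and should not) be used.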
How to use function calling in Mendix
The good news is that developers don’t need any additional deployments, tools, services, or skills to use function calling in Mendix. Everything can be configured inside Mendix Studio Pro with our platform-supported GenAI connectors.
The diagram below outlines how function calling from a Mendix app works.
Step 1
Functions in Mendix are essentially microflows that can be described and registered as part of the request to the LLM. In general, adding functions to a request is optional. If a request does not contain functions, steps two through five are not applicable and the LLM directly generates an answer based on the user prompt as shown in step six.
Step 2
Based on the current context and the function descriptions, the model decides whether any predefined functions (microflows) should be called to gather more information or take action based on the user’s message (known as the user prompt).
Step 3
The LLM does not call a function itself. Instead, it returns a structured response indicating which functions the Mendix app should execute and with which input parameters.
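For illustration, a tool call in an OpenAI-style response has roughly the shape sketched below (field names vary slightly per provider, and the function name and arguments are hypothetical):

```python
# Illustrative shape of a tool call returned by an OpenAI-style LLM (step 3).
# The model executes nothing itself; it only names the function the app should
# run and supplies the input parameters as a JSON string.
tool_call = {
    "id": "call_123",  # hypothetical identifier for matching the result later
    "type": "function",
    "function": {
        "name": "get_weather",                 # the function to execute
        "arguments": '{"city": "Rotterdam"}',  # required input parameters
    },
}
```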
Steps 4 and 5
The function microflows are then executed inside the Mendix app. Their return values are sent back to the LLM so it can continue processing the original user prompt.
This process, visualized as steps two through five in the diagram, repeats until the LLM returns the final response in the next step.
Step 6
The LLM returns the final assistant response.
The chat completions operation in the platform-supported GenAI connectors handles all six steps automatically. This allows developers to focus only on creating function microflows to add to the request, while the rest is managed for them.
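To give a feel for what the connector automates, here is a conceptual Python sketch of that loop using the OpenAI Python SDK. It is not the Mendix implementation: the model name is arbitrary, and `REGISTERED_FUNCTIONS` is a hypothetical stand-in for the function microflows registered on the request.

```python
import json

from openai import OpenAI  # illustrative only; in Mendix the connector does this for you

client = OpenAI()

# Hypothetical registry mapping function names to Python callables
# (in Mendix, the request maps function names to microflows instead).
REGISTERED_FUNCTIONS: dict = {}


def chat_with_functions(messages: list, tools: list, model: str = "gpt-4o") -> str:
    """Run the function-calling loop (steps 2-6) until the model returns a final answer."""
    while True:
        response = client.chat.completions.create(
            model=model, messages=messages, tools=tools
        )
        message = response.choices[0].message
        if not message.tool_calls:        # step 6: no function needed, final assistant response
            return message.content
        messages.append(message)          # keep the tool-call request in the conversation
        for call in message.tool_calls:   # steps 4 and 5: execute functions, return results
            arguments = json.loads(call.function.arguments)
            result = REGISTERED_FUNCTIONS[call.function.name](**arguments)
            messages.append(
                {"role": "tool", "tool_call_id": call.id, "content": str(result)}
            )
```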
Mendix developers are in full control
In a Mendix app, function microflows are executed in the context of the current user. This allows developers to make sure that only data that is relevant for the current user is retrieved and exposed. Therefore, the Mendix app has full control over what microflows are called and what data is used by the LLM.
This gives developers fine-grained control over data security and action validation. Besides retrieving data, users can also trigger actions based on their prompt, allowing them to interact with the current page.
For actions that can affect the real world, like creating objects, triggering microflows, or updating UI elements, it’s a good idea to include a confirmation step to keep the user involved. You don’t want your assistant accidentally ordering 100 pizzas when you meant to order 10! (Other examples include sending an email, submitting an order, or booking an appointment.)
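One simple way to keep the user in the loop is to gate such functions behind an explicit confirmation before they run. The sketch below is a hypothetical Python illustration of the idea; in a Mendix app the confirmation would typically be a dialog shown on the page before the microflow executes.

```python
# Hypothetical confirmation gate for functions with real-world side effects.
REGISTERED_FUNCTIONS: dict = {}  # function name -> callable, as in the loop sketch above
REQUIRES_CONFIRMATION = {"CreateTicket", "SubmitOrder", "SendEmail"}  # hypothetical names


def ask_user_to_confirm(name: str, arguments: dict) -> bool:
    """Stand-in for a confirmation dialog shown to the user."""
    answer = input(f"Run {name} with {arguments}? [y/N] ")
    return answer.strip().lower() == "y"


def execute_with_confirmation(name: str, arguments: dict) -> str:
    """Only run side-effecting functions after the user explicitly confirms."""
    if name in REQUIRES_CONFIRMATION and not ask_user_to_confirm(name, arguments):
        # Report the refusal back to the LLM so it can respond accordingly.
        return "The user declined this action."
    return str(REGISTERED_FUNCTIONS[name](**arguments))
```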
Both the OpenAI Connector and Amazon Bedrock Connector support chat completions with function calling by leveraging the GenAI Commons module. These modules provide easy-to-use operations to add functions to a request.
The microflow below shows an example of how:
- A request object is created
- Multiple functions are added
- A chat completions operation with your preferred AI provider, such as Amazon Bedrock or (Azure) OpenAI, is executed
- The assistant response is updated so that it can be shown on a page
When using a chat completions operation with a platform-supported GenAI connector, developers just need to add the function microflows to the request. The connector will take care of the response and run the function microflows until the LLM provides the final assistant response.
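In Python-flavoured pseudocode, the developer-facing part of that microflow boils down to something like the sketch below. The `Request` and `Function` classes are hypothetical stand-ins for the GenAI Commons objects, and the function names are invented; the real module exposes microflow actions rather than a Python API.

```python
from dataclasses import dataclass, field


@dataclass
class Function:
    """Hypothetical stand-in for a registered function (a microflow plus its description)."""
    name: str
    description: str


@dataclass
class Request:
    """Hypothetical stand-in for a GenAI Commons request object."""
    system_prompt: str
    functions: list[Function] = field(default_factory=list)

    def add_function(self, name: str, description: str) -> None:
        self.functions.append(Function(name, description))


request = Request(system_prompt="You are an IT support assistant.")
request.add_function(
    "RetrieveOpenTicketsForUser",
    "Retrieve the current user's open support tickets.",
)
request.add_function(
    "CreateTicket",
    "Create a new support ticket for the current user.",
)
# A chat completions operation with the preferred provider (Amazon Bedrock or
# (Azure) OpenAI) would then take this request, run the function-calling loop,
# and return the final assistant response for display on the page.
```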
How is function calling used in the Support Assistant Starter App?
Let’s take a closer look at the Support Assistant Starter App to see how function calling is essential to creating an engaging, interactive user experience.
Usually, employees visit an IT support application because they have an IT-related problem. Here, support chatbots are supposed to reduce the overhead on first-line support. They do this by reducing the number of tickets created and the resources needed to process them.
By using function calling, we’ve given the support assistant the ability to choose any of the following actions:
Query a knowledge base with manuals
Here, the LLM queries a private knowledge base of static manuals and guides containing solutions to common IT problems. This is essentially Retrieval Augmented Generation (RAG) on static textual data, wrapped in a function.
Query a knowledge base containing resolved tickets
This function looks through resolved tickets with similar descriptions to suggest possible solutions. Unlike the first function, which uses static free-text data, this one performs RAG on Mendix data that is object-related and can change over time.
If the user solves their problem with a solution from the manuals or resolved tickets, the chat will end, and no ticket will be created. This saves time for the user as well as for the help desk employee.
Check for similar open tickets of the user and update ticket
Before creating a new ticket, the assistant checks whether the same employee has already opened a ticket for a similar problem. If they have, it can use the update ticket function to add the new information to the existing ticket.
If none of the earlier steps (solutions from closed tickets or the knowledge base) resolves the problem, the assistant can ask the user whether they want to create a new ticket. This helps users include the important details, making the ticket easier for support staff to handle and reducing extra work.
Create a new ticket
This function creates a ticket for the user using the information already provided in the conversation. This leads to high-quality tickets that capture all relevant details, allowing for faster ticket resolution.
When defining functions, it’s essential to clearly explain their purpose and when to use them. The description serves as a prompt for the LLM, indicating when to call the function, what input it needs, and how to interpret its return value.
You can include general guidelines on the order of function execution in the system prompt. For example, the support assistant should always check the knowledge base and resolved tickets for solutions before creating a new ticket.
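As a purely hypothetical illustration, the ordering guidance in the system prompt and the description of one function might look like this (the prompts shipped with the starter app will differ):

```python
# Hypothetical system prompt and function description for the support assistant.
SYSTEM_PROMPT = (
    "You are an IT support assistant. "
    "Always search the manuals knowledge base and the resolved tickets for a "
    "solution first. Offer to create a new ticket only if neither source solves "
    "the problem, and check for similar open tickets before creating one."
)

CREATE_TICKET_DESCRIPTION = (
    "Create a new support ticket for the current user. "
    "Call this only after the user confirms that no suggested solution worked "
    "and agrees to open a ticket. "
    "Input: a short title and a detailed description of the problem. "
    "Returns the identifier of the newly created ticket."
)
```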
Why you should use function calling in your Mendix app
When it comes to data retrieval, you can use either prompt engineering or function calling. However, function calling has some clear advantages:
- It’s hard to predict what information will be relevant in future conversations, which can lead to adding too much information to a system prompt. While function calling still requires you to define functions upfront, it prevents irrelevant information from cluttering the chat context. The LLM can then determine which data to retrieve and include as needed.
- If you add unnecessary information to the system prompt, the LLM may lose focus on what’s important. This can result in unexpected behaviors and responses.
- Additionally, users are typically charged based on the number of input and output tokens in a request. Including a lot of extra data makes the prompt longer, increasing input tokens and raising costs unnecessarily.
Function calling helps solve these challenges by providing only the data needed at the right time, ensuring it’s always up to date.
In other cases, where actions need to be triggered rather than data retrieved, prompt engineering doesn’t work well. Function calling offers new possibilities that would typically require agents or assistants, but in a low-code way.
The benefits of using function calling with the platform-supported Mendix connectors based on the GenAI Commons module include:
- Microflows can be executed inside the Mendix app without exposing them via an API to external systems.
- There is no need to write code outside of Mendix, making function calling a true low-code experience.
How to get started
Check out the new Support Assistant Starter App to help you quickly build a virtual agent based on the use case we discussed. It’s also a great way to learn how to implement function calling in Mendix.
If you’re new to adding GenAI capabilities to a Mendix application, we highly recommend exploring the GenAI showcase application. It features a range of inspiring use cases, from simple chat implementations using the Conversational UI module to more advanced features like function calling, retrieval-augmented generation (RAG), vision, and image generation.
Both the Support Assistant Starter App and the GenAI showcase app use platform-supported connectors for two popular AI providers: (Azure) OpenAI and Amazon Bedrock. You can easily switch between these providers and their models, making it simple to test and compare them side by side.
These connectors are built on the GenAI Commons module, which provides a shared domain model and reusable actions for registering functions and adding them to requests. The connectors handle function call responses from the LLM and execute the functions based on the current user’s context.
Be sure to check out the overview of models that support function calling and review the technical documentation.
If you’re working on your own GenAI use case and need assistance or want to provide feedback, we’d love to hear from you. Contact your Customer Success Manager or message us in the #genai-connectors channel on the Mendix Community Slack. Sign up here!