Hey ChatGPT, Write a Blog Post on Prompt Engineering [Part 2]

In this series, we are looking at how to master the art of crafting precise and effective prompts through practice and experimentation. In part one, we covered the different prompt types, best practices for prompt creation, and the top five prompt techniques. For part two, we’re going to explore Tree-of-Thought prompting as well as how to start prompting – especially if you’re new to this.

Use case: Tree-of-Thought prompting

The Support Assistant Starter App is a Mendix App that can be downloaded and adjusted to your requirements. It uses Retrieval Augmented Generation (RAG) and Function Calling to connect to your own database, making this use case a great example for prompting (for now – more to come!).

The Support Assistant App was created to demonstrate how GenAI can be used as an in-company support assistant for employees with IT-related inquiries. It provides solutions based on static data or previous similar issues, and creates a support ticket for the IT help desk when the inquiry cannot be solved by the chatbot alone. Similarly, if the user needs a new license or hardware, the bot can open a new request for the user and facilitate the experience. For more information, look at the animation below.

To achieve these capabilities, nine function microflows have been created that the LLM can call to help generate a response. The prompt plays a crucial role in determining which function should be called in a given scenario. Consequently, we will focus on the parts of the prompt that we encourage you to consider in your next GenAI-enriched Mendix App.
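
To make the idea concrete, here is a minimal Python sketch (not the actual Mendix implementation) of the pattern behind exposing multiple functions to an LLM: a registry maps the function names the model may return to concrete handlers. `RetrieveStaticInformation` and `RetrieveSimilarTickets` are named later in this post; the ticket-creation function name and all handler bodies are hypothetical stand-ins.

```python
def retrieve_static_information(query):
    # Stand-in for the knowledge-base lookup microflow.
    return f"Knowledge-base answer for: {query}"

def retrieve_similar_tickets(query):
    # Stand-in for the similar-tickets search microflow.
    return f"Similar past tickets for: {query}"

def create_support_ticket(description):
    # Stand-in for the ticket-creation microflow (name assumed).
    return f"Created ticket: {description}"

# Registry mapping the function names the model may return to handlers.
FUNCTIONS = {
    "RetrieveStaticInformation": retrieve_static_information,
    "RetrieveSimilarTickets": retrieve_similar_tickets,
    "CreateSupportTicket": create_support_ticket,
}

def dispatch(function_name, argument):
    """Route the function call chosen by the LLM to its handler."""
    handler = FUNCTIONS.get(function_name)
    if handler is None:
        raise ValueError(f"Unknown function: {function_name}")
    return handler(argument)
```

The prompt's job, as we will see below, is to tell the model *when* each of these names is the right one to return.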

To read the full prompt, download the Support Assistant Starter App, open the TicketBot_ActionMicroflow microflow, locate the Create Request from ChatContext activity, and look for the SystemPrompt parameter.

As a starting point, it is crucial to assign the bot a role and provide initial information. In the Support Assistant App example, we assign it the persona of an assistant with access to a private database for inquiries.

You are a helpful assistant supporting the IT department with employees' inquiries, such as support tickets, licenses (e.g., Mendix) or hardware (e.g., Computer) requests. 
Use the knowledge base and previous support tickets as a database to find a solution to the user’s request without disclosing sensitive details or data from previous tickets.
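
If you assemble a system prompt programmatically rather than as one hard-coded string, a minimal sketch could look like the following. The wording paraphrases snippets from this post; the structure (persona, then output guidelines, then numbered instructions) mirrors the order the post walks through.

```python
PERSONA = (
    "You are a helpful assistant supporting the IT department with "
    "employees' inquiries, such as support tickets, licenses, or hardware requests."
)

OUTPUT_GUIDELINES = "The user expects direct and clear answers from you."

INSTRUCTIONS = [
    "Never assume the user's issue based on a vague input.",
    "Use the knowledge base and previous support tickets to find a solution.",
]

def build_system_prompt():
    # Number the scenario instructions so the model can reference them by step.
    numbered = "\n".join(f"{i}. {text}" for i, text in enumerate(INSTRUCTIONS, start=1))
    return f"{PERSONA}\n\n{OUTPUT_GUIDELINES}\n\n{numbered}"
```

Keeping the building blocks separate makes it easy to add new rules later without rewriting the whole prompt.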

Next, we provide some details on what the desired output should look like, including the language, tone, and format.

The user expects direct and clear answers from you. 
Use language that users who might not be familiar with advanced software or hardware usage can understand. 
The text will be rendered in markdown ...

Now that the bot has its ‘personality,’ we move on to the instructions, using the tree-of-thought approach displayed in the image below. For this, we enumerate each scenario to make it easier for the LLM to follow the instructions.

For instance, as a first case, we consider vague or incomplete requests. We want to prevent the AI model from guessing the user’s intent, as that might lead to a false response. Therefore, we include:

Never assume the user’s issue based on a vague input. 
If the user’s input is not clear or seems incomplete, you must first ask the user for more details to clarify the problem...

Do not proceed to suggest solutions or actions until the user’s issue is clearly understood.

Examples of these types of requests can be provided, along with example responses, such as:

Example response: "Could you please provide more details about the issue you are experiencing?"

Optionally, you can limit the topics the bot can address by including the following in your prompt, with an example response.

Avoid answering questions about cooking, music, or dancing. 
Example response: "Your request is not an IT topic. I am here to assist you with support tickets and new license or hardware matters."

Once the basics are established, we can shift the guidelines to the specific use case: the instructions for the user’s requests, where the available request types are handled.

There are two types of requests - requesting a new license or hardware, or IT support.

Subsequently, we can dive deep into the specific instructions for each request and mention the function name that the model should call if the requirement is fulfilled.

If the user asks for help with an issue (e.g., battery drain, connection problems, etc.), call the RetrieveStaticInformation function to provide a solution from the knowledge base.
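
If you were wiring this up directly against an API that supports function calling (rather than through the Mendix connectors), the function might be declared with a JSON schema in the style shown below. The function name comes from this post; the parameter name and descriptions are assumptions for illustration.

```python
# Hypothetical JSON-schema declaration of the RetrieveStaticInformation
# function, in the style used by common function-calling APIs. The
# description tells the model *when* to call it, echoing the prompt rule.
retrieve_static_information_tool = {
    "type": "function",
    "function": {
        "name": "RetrieveStaticInformation",
        "description": (
            "Look up a solution in the knowledge base when the user asks "
            "for help with an issue, e.g. battery drain or connection problems."
        ),
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "A short description of the user's issue.",
                }
            },
            "required": ["query"],
        },
    },
}
```

Writing a clear `description` for each function is itself a form of prompting: it is the main signal the model uses to pick the right branch.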

Including an alternative branch in case a condition is not met is where the tree-of-thought approach comes into play, as displayed in the image above. In the Support Assistant App example, whenever the condition to call a specific function is not met, the LLM moves on to the next condition until it finds a precise function to address the request.

For instance, if the information is not found via the RetrieveStaticInformation function, it continues with the RetrieveSimilarTickets one. Only when none of the above conditions are met does it move to the next function and create a new support ticket. Since the prompt is a list of instructions, it is possible to reference later steps to avoid repeating instructions and ending up with a long prompt. An example of such a reference is:

If none of the options above help the user, proceed to step 3 Handling New Support Tickets.
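
In code, the fallback chain described above amounts to trying each branch in order and only creating a ticket when nothing else helps. The sketch below uses keyword stubs in place of the real retrieval microflows; in the app, the LLM makes these decisions from the prompt rather than from `if` statements.

```python
def retrieve_static_information(query):
    # Stub: pretend only battery issues are covered by the knowledge base.
    return "Try lowering the screen brightness." if "battery" in query.lower() else None

def retrieve_similar_tickets(query):
    # Stub: pretend a past ticket covers VPN problems.
    return "Ticket #42 solved a similar VPN issue." if "vpn" in query.lower() else None

def create_support_ticket(query):
    # Step 3: Handling New Support Tickets.
    return f"New support ticket created: {query}"

def handle_issue(query):
    """Walk the branches in order; fall through when a condition is not met."""
    for branch in (retrieve_static_information, retrieve_similar_tickets):
        answer = branch(query)
        if answer is not None:
            return answer
    return create_support_ticket(query)
```

The enumerated prompt steps play the role of this `for` loop: each numbered instruction tells the model what to try next when the previous condition is not met.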

The third step, which focuses on handling new support tickets, can again contain a list of instructions for the model to follow. As demonstrated in the demo, the AI model facilitates the creation of new tickets. Therefore, you can add guidelines for the model relevant to this step, such as:

Write the ticket description from the user’s perspective.
If you do not agree with an update request (e.g., the user wants to categorize a Miro license request as a hardware issue), explain to the user why you do not recommend that change. 
However, if the user insists, you can make the change.

Naturally, you want to include certain limitations or rules to be followed every time, as the graph displays under Additional Rules. These could not be included in one of the previously mentioned categories, since they are more general rules that the bot must always follow.

For instance, app limitations (i.e., if the bot does not have permission to perform certain activities, such as saving or deleting a ticket, it should inform the user to do it manually) or function limitations (i.e., what to do when the bot is not sure which function to call).

It might sound and look overwhelming for a newcomer at first, but it is essential to have a structure to rely on. That way, whenever new functions, instructions, or rules need to be added to the prompt, they are easier to fit in. In the next section, we will give additional tips on how to start your journey.

How to start: Prompt without limits

Starting to prompt might feel challenging at first. However, there are a few recommendations you can follow to feel more confident on this journey. These include:

Start small and focus on experimentation

There is no perfect prompt on the first try. Focus on writing a few lines to get an idea of how the AI model responds to your command. This will be relevant for the next steps.

Find a use case and collect all requirements

Without having a use case in place, or knowing what the expectations for the LLM are, it can be difficult to simply prompt and expect the model to give the right answers. Therefore, make a list of requirements and requests.

From the scenarios to the prompt creation

Writing down all scenarios for the use case, and what is expected from the prompt, can help with creating the first prompt draft. Consider the prompt types, best practices, and, if necessary, the prompt techniques.

Iterative refinement (testing, and more testing)

Test all written scenarios to evaluate the created prompt. Try phrasing the questions in several forms to assess the responses.
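
A lightweight way to keep this testing repeatable is a small scenario table: the same intent phrased several ways, each paired with the function you expect the model to route it to. In the sketch below the classifier is a keyword stub standing in for the real LLM call; the function names follow the ones used in this post, with `CreateRequest` assumed for the license/hardware path.

```python
def classify_request(user_input):
    # Keyword stub standing in for the real LLM routing decision.
    text = user_input.lower()
    if any(word in text for word in ("license", "licence", "hardware", "laptop")):
        return "CreateRequest"
    return "RetrieveStaticInformation"

# The same intent phrased in several ways, with the expected routing.
SCENARIOS = [
    ("I need a Mendix license", "CreateRequest"),
    ("Could you order me a new laptop?", "CreateRequest"),
    ("My battery drains too fast", "RetrieveStaticInformation"),
]

def run_scenarios():
    # Return the phrasings that were routed to the wrong function.
    return [(q, want) for q, want in SCENARIOS if classify_request(q) != want]
```

Rerunning such a table after every prompt change quickly shows whether a tweak fixed one scenario at the cost of breaking another.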

External review

For further improvements to the model, you might need some external feedback. A different set of eyes might surface new issues that can be addressed in the prompt.

In case the output is not as expected

The AI model might not be providing you with the desired output, but do not worry. Where there is a problem, there is (mostly) a solution. Here are a few of them:

  • Change the model variant: Upgrading to a higher model variant can have a big impact on performance. For example, GPT-4o will likely outperform GPT-3.5 on most tasks due to improvements in architecture, training, and overall capabilities.
  • Find alternatives: Depending on the use case, you could improve performance by re-organizing, simplifying, or reducing the number of instructions in the prompt. For instance, in cases that use function calling, creating a new function instead of adding a new instruction to the prompt might be a solution.
  • Test by steps: Identifying where your prompt is failing is another way of learning why the output is not what is expected. Asking the LLM several questions for each instruction given, and identifying where the AI model underperforms, helps you find a focus area for improvement.
  • Start from ‘zero’: Sometimes rebuilding the prompt outline from a different viewpoint or changing the prompt technique can boost performance. For instance, change the order of the instructions, or opt for a tree-of-thought approach.
  • Change the scope of the project: Although this might be the least desirable option, if you are encountering many issues, consider readjusting the scope of the project to ensure the accuracy of the responses. For a project like the Support Assistant App, an example could be having two prompts, where one handles the support tickets, and the other the license or hardware requests.
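
The scope-splitting idea from the last bullet can be sketched as a simple router that sends each conversation to the narrower of two prompts. The keyword routing and prompt texts below are illustrative assumptions; a real app might use a classifier call or separate chat entry points instead.

```python
TICKET_PROMPT = "You handle IT support tickets only."
REQUEST_PROMPT = "You handle license and hardware requests only."

def choose_prompt(first_message):
    # Route the conversation to the narrower of the two prompts.
    text = first_message.lower()
    if any(word in text for word in ("license", "licence", "hardware")):
        return REQUEST_PROMPT
    return TICKET_PROMPT
```

Two short, focused prompts are often easier to test and maintain than one long prompt that covers every case.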

Going forward: Crafting prompts for success

With all this knowledge, you can start your prompt-engineering journey and revisit this blog as many times as needed for guidance and tips. That said, feel free to revisit the Essential Best Practices for Prompt Creation discussed in the first post of this series. Best of luck!

If you are working on your own GenAI use case and need help or would like to provide feedback, we would love to hear from you. Get in touch with your Customer Success Manager or send us a message in the #genai-connectors channel in the Mendix Community Slack. Sign up here!

Additional resources to check out: Prompt library.