How to get started with AI

Applying AI might sound complex, but it doesn’t have to be.

Here are three examples of how we can easily get your organisation started with AI. All of them are based on LLMs (large language models) such as ChatGPT and other similar tools.


1. Get a customized content generator: Build a custom UI on top of an LLM

We can develop a customized content generator for your company that gives you great user experience, delivers high quality output, and ensures that data is kept private.

Many people are already using ChatGPT or similar tools, where you can ask questions and get answers. However, many companies want to make sure that their data is kept private and not used by OpenAI or other providers to train their models.

Moreover, it can also make sense to wrap this basic chat functionality in a UI that makes the process more user-friendly and produces higher-quality output.

A simple question and response to ChatGPT can be a very powerful technique. But implementing a custom UI on top of the LLM lets you control both the input and the output, giving you a much better user experience and higher-quality results.

The technologies used in our content generator make it very easy to extend and adjust in any way that suits your company.
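To make the idea concrete, here is a minimal sketch of what "controlling the input and output" can look like in code. All names are illustrative, and `call_llm` is a stand-in for a real model call (for example via an LLM provider's SDK):

```python
from dataclasses import dataclass

# Placeholder for a real LLM call (e.g. via a provider SDK).
# Stubbed here so the sketch is self-contained and runnable offline.
def call_llm(prompt: str) -> str:
    return f"[generated text for prompt of {len(prompt)} chars]"

@dataclass
class ContentRequest:
    """Structured input the custom UI collects instead of free-form chat."""
    topic: str
    audience: str
    tone: str
    max_words: int = 200

def build_prompt(req: ContentRequest) -> str:
    # Controlling the input: the UI turns form fields into a consistent
    # prompt template, so output quality does not depend on the user's
    # prompt-writing skills.
    return (
        f"Write a piece of marketing copy about '{req.topic}' "
        f"for an audience of {req.audience}. "
        f"Use a {req.tone} tone and stay under {req.max_words} words."
    )

def generate_content(req: ContentRequest) -> str:
    prompt = build_prompt(req)
    raw = call_llm(prompt)
    # Controlling the output: post-process before showing it to the user.
    return raw.strip()
```

The key design choice is that users fill in a few well-defined fields rather than writing prompts themselves, which keeps the output consistent across the whole organisation.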

2. Enrich your chat with company-specific data: Using Retrieval Augmented Generation (RAG)

The overall concept of RAG is simple, and we can quickly set up a POC (proof of concept) that illustrates the power of this technique.

In short, RAG is a technique that allows us to use a pretrained LLM – such as ChatGPT – but enables the LLM to answer questions specific to your data. An LLM enriched with your organisation's own dataset will ensure responses that are both relevant and current for your business. The next step is to ensure that quality, observability, and security meet the requirements of your company.

The way RAG operates is by taking a query (for example a question from a user), searching the indexed data for relevant information, and then using that information together with the initial question as the basis for generating answers. This not only improves the accuracy of responses, but also ensures they are tailored to the specific context of the inquiry, leading to more informed decision-making based on the most relevant internal and external sources.
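The retrieve-then-generate flow described above can be sketched in a few lines. This toy version uses simple keyword overlap as the relevance score (a production system would use vector embeddings and a vector index), and the retrieved text is pasted into the prompt as context:

```python
def score(query: str, document: str) -> int:
    """Toy relevance score: count query words that appear in the document."""
    query_words = set(query.lower().split())
    doc_words = set(document.lower().split())
    return len(query_words & doc_words)

def retrieve(query: str, documents: list[str], k: int = 1) -> list[str]:
    """Return the k most relevant documents for the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_rag_prompt(query: str, documents: list[str]) -> str:
    # The retrieved context and the original question together form the
    # prompt, so the LLM answers from your data rather than from memory.
    context = "\n".join(retrieve(query, documents))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )
```

In a real deployment the document list would be your indexed company data, and the resulting prompt would be sent to the LLM; both are kept abstract here.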

3. Go beyond basic chat functionality: Function calling

We use ‘function calling’ as a powerful technique in our implementation of an LLM-based application.

‘Function calling’ serves as a bridge between LLMs and other systems or data sources and can be thought of as a way to enhance the capabilities of an AI system by enabling it to interact with other software tools or databases.

You can imagine ‘function calling’ as a team member who can not only understand and respond to your requests, but also reach out to other team members or resources to gather information or perform tasks. This interaction is managed through a set of instructions that the AI follows to achieve the desired outcome, much like asking a colleague to fetch a file or update a record.

As such, ‘function calling’ can be used to trigger actions in other systems, such as updating a database, sending notifications, or posting content online.
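The dispatch step at the heart of function calling can be sketched as follows. The two business functions are hypothetical examples, and the JSON tool call is hard-coded here; in a real application it would come from the model's response (most LLM APIs return structured tool calls of roughly this shape):

```python
import json

# Hypothetical business functions the LLM is allowed to trigger.
def update_record(record_id: str, status: str) -> str:
    return f"record {record_id} set to {status}"

def send_notification(recipient: str, message: str) -> str:
    return f"notified {recipient}: {message}"

# Registry of functions the model may call, by name.
TOOLS = {
    "update_record": update_record,
    "send_notification": send_notification,
}

def dispatch(tool_call_json: str) -> str:
    """Execute the function the model asked for, with its arguments."""
    call = json.loads(tool_call_json)
    func = TOOLS[call["name"]]
    return func(**call["arguments"])

# Simulated model output; a real LLM would produce this structure itself.
model_output = json.dumps(
    {"name": "update_record", "arguments": {"record_id": "42", "status": "closed"}}
)
print(dispatch(model_output))  # record 42 set to closed
```

The registry acts as an allow-list: the model can only trigger functions you have explicitly exposed, which is what keeps this bridge between the LLM and your other systems safe.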