
AIAgent

The AIAgent tag lets you create and configure AI agents. Agents use LLMs to engage in dialogs and handle any user requests.

Each agent has its own role and goal. For example, an agent can be a translator who works with multiple languages:

state: Start
    q!: $regex</start>
    a: How can I help you?

state: Translation
    q!: * translate *
    # The agent that will translate the text
    AIAgent:
        id = translator_agent
        # LLM the agent uses
        model = gpt-4o
        # Role in the dialog
        role = Professional translator
        # Goal in the dialog
        goal = Translate the text into another language
        # Instructions for the agent
        instructions = If necessary, ask the user for the target language. Give 3 best translation variants
        # The agent will go to the /Start state if the user has no more questions
        nextState = /Start

To get started:

  1. Set up a connection to Caila.
  2. Specify tag parameters.
  3. Use the tag in the script.

LLMs for the agent

The agent connects to LLMs through the openai-proxy service on the Caila platform.

caution

LLMs are available only with a paid Caila plan.

Add a secret to JAICP to get started.

Access token for Caila

To use services and generative models from Caila in third-party applications, including JAICP, you need a personal access token. To issue a token:

  1. Go to Caila.

    tip

    Caila and Conversational Cloud use a shared account base, so if you are registered on Conversational Cloud, you do not need to register additionally on Caila.

  2. Go to the My space → API tokens section.

  3. In the upper right corner, click Create token.

  4. Give a name to the token, generate it, and copy it to the clipboard.

Next, add a secret to JAICP:

  1. Go to JAICP.

  2. In the Secrets and variables section, add a new secret:

    • Name is LLM_API_KEY.
    • Value is the received Caila token.

    You can use a different name for the secret. If you do, specify it in the chatbot.yaml file in the injector.LLM_API_KEY_SECRET_NAME section:

    injector:
      LLM_API_KEY_SECRET_NAME: "MY_API_KEY"

Parameters

General settings

• id (String, required)

  Agent ID. Each agent in the script must have a unique ID.

  caution
  A state can have only one AIAgent tag.

• model (String, required)

  Specify the LLM that the agent will use.

  To access the LLM, the agent uses the openai-proxy service on the Caila platform. You can view available models and their prices on the service page.

  Some tag parameters can be used only if the LLM supports function calling: the agent itself can call the necessary functions in the script. These parameters are marked with the function calling label. For a list of models that support function calling, see the OpenAI documentation.

  note
  The OpenAI website is not available for Russian IP addresses.

• role (String, required)

  The agent role or character in the dialog. For example: Bank employee. Affects the conversation tone and how the agent talks about itself.

• goal (String, required)

  Agent goal. For example: Help the user make an order. In the dialog, the agent will try to achieve this goal.

• instructions (String, optional)

  Instructions for the agent. Use it to specify the behavior of the agent. Example: Answer only in the user’s language. First, make sure you understood the question correctly.

tip

Sometimes an LLM might give unexpected responses or inaccurate data.

We recommend that you fill in the role, goal and instructions parameters with detailed information. This way, you can get more predictable and consistent results.
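
For example, here is a minimal sketch of a more detailed configuration. The state name, agent ID, and wording are illustrative, not taken from a real project:

state: Delivery
    AIAgent:
        id = delivery_agent
        model = gpt-4o
        role = Courier service support specialist
        goal = Help the user track a parcel or change the delivery address
        instructions = Answer only in the user's language. Be polite and concise. Ask one clarifying question at a time. If the request is not about delivery, say that you cannot help with it.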

LLM settings

• temperature (Number, optional)

  Adjusts the creativity level of responses. At higher values, the results are more creative and less predictable.

  We do not recommend changing the temperature and topP parameters at the same time.

  Accepts values from 0.0 to 2.0. Default: 1.0. Recommended: 0.6.

• topP (Number, optional)

  Adjusts the diversity of responses. At lower values, the LLM selects words from a smaller, more likely set. At higher values, the response becomes more diverse.

  We do not recommend changing the temperature and topP parameters at the same time.

  Accepts values from 0.0 to 1.0. Default: 1.0. Recommended: 1.0.

• frequencyPenalty (Number, optional)

  Penalty for the frequency of words. By increasing the value, you reduce the likelihood of words or phrases appearing multiple times in the response.

  Accepts values from -2.0 to 2.0. Default: 0.0. Recommended: 0.0.

• presencePenalty (Number, optional)

  Penalty for word and phrase repetition. By increasing the value, you decrease the likelihood of repetitions in the response. All repetitions are penalized equally, no matter how frequently they occur.

  Accepts values from -2.0 to 2.0. Default: 0.0. Recommended: 0.0.

• maxTokens (Number, optional)

  Maximum number of tokens that the model can generate in one iteration. Recommended: 4000.

You can learn more about these parameters in the OpenAI documentation.

note

The OpenAI website is not available for Russian IP addresses.
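
As a rough illustration, the sketch below sets the recommended values from the table above on an agent. The state name, agent ID, role, and goal are made up for this example:

state: Consultation
    AIAgent:
        id = consultant_agent
        model = gpt-4o
        role = Online store consultant
        goal = Help the user choose a product
        # Recommended LLM settings
        temperature = 0.6
        topP = 1.0
        frequencyPenalty = 0.0
        presencePenalty = 0.0
        maxTokens = 4000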

Get data and call functions

• requiredData (Array of objects, optional; function calling)

  Data that the agent must ask the user for.

• functions (Array of strings, optional; function calling)

  The agent can call these functions in the script.
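
A minimal sketch of both parameters. The agent ID, function name, and requiredData fields are illustrative and follow the format used in the Example section at the end of this article:

state: Order
    AIAgent:
        id = order_agent
        model = gpt-4o
        role = Shop assistant
        goal = Help the user place an order
        # The agent must ask the user for the delivery address
        requiredData =
        [
            {
                "name":"deliveryAddress",
                "type":"string",
                "description":"User's delivery address"
            }
        ]
        # The agent can call this function in the script
        functions = ["createOrder"]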

RAG knowledge base

• knowledgeBase (String, optional)

  Name of the knowledge base secret. The agent gets information from this knowledge base.

• knowledgeBaseConfidence (Number, optional)

  Knowledge base confidence. The agent only gets those text fragments (chunks) from the knowledge base that have a relevance score above this threshold.

  Specify a value from 0.0 to 1.0. Recommended: 0.8.

For more information on how the agent works and how to connect the knowledge base, see Use knowledge base.
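
A minimal sketch, assuming the project already has a knowledge base secret named KNOWLEDGE_BASE_SECRET. The secret name, state, and agent are illustrative:

state: FAQ
    AIAgent:
        id = faq_agent
        model = gpt-4o
        role = Support specialist
        goal = Answer product questions using the knowledge base
        # Name of the knowledge base secret
        knowledgeBase = KNOWLEDGE_BASE_SECRET
        # Use only chunks with a relevance score above 0.8
        knowledgeBaseConfidence = 0.8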

History and additional data

With these parameters, you can pass the message history and additional data to the agent. This will allow the agent to take into account the context of the dialog.

• chatHistoryEnabled (Boolean, optional)

  If true, the agent will get the bot dialog history.

• chatHistoryLimit (Number, optional)

  The number of recent messages that the agent will get from the bot dialog history. Default: 50.

• context (Object, optional)

  Additional data that the agent can use. Can be in any format.
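
A minimal sketch of these parameters. The $session.orderData object, state name, and agent are illustrative; the templating syntax for context mirrors the Example section below:

state: Support
    AIAgent:
        id = support_agent
        model = gpt-4o
        role = Support specialist
        goal = Help the user with an existing order
        # The agent gets the last 10 messages of the dialog
        chatHistoryEnabled = true
        chatHistoryLimit = 10
        # Pass additional data to the agent as context
        context = {{JSON.stringify($session.orderData)}}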

Transition to states

If the user is in a dialog with the agent, use the parameters below to change the logic of state transitions.

• nextState (String, optional; function calling)

  The bot transitions to this state if the agent considers the goal achieved and the user has no more questions.

• noMatchState (String, optional; function calling)

  The bot transitions to this state if the agent considers that the user request is not related to the goal.

• intentConfidence (Number, optional)

  Threshold value for intents and patterns in the script. If specified, for each request to the agent:

  1. The bot executes $nlp.match relative to the script root /.
  2. If the weight of the pattern or intent match is greater than the threshold value, the bot transitions to the matched state.

• llmClassificationEnabled (Boolean, optional; function calling)

  If true, the agent can transition the dialog to other states containing the AIAgent tag.

  When selecting a state, the agent takes into account:
  • The roles and goals of other agents: the role and goal parameters.
  • Names of the states that contain agents.

  caution
  A state can have only one AIAgent tag.

  We recommend giving states names that are very different from each other. The agent might make more mistakes if the script contains states with similar names, for example: NoMatch1 and NoMatch2.
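
A minimal sketch that combines the transition parameters. The state names, threshold value, and agent are illustrative:

state: Booking
    AIAgent:
        id = booking_agent
        model = gpt-4o
        role = Hotel booking assistant
        goal = Help the user book a room
        # The goal is achieved and the user has no more questions
        nextState = /Feedback
        # The user request is not related to the goal
        noMatchState = /Operator
        # Check intents and patterns in the script for each request
        intentConfidence = 0.7
        # The agent can transition the dialog to other states with the AIAgent tag
        llmClassificationEnabled = true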

How to use

Analytics

The agent adds the following comment to each user phrase: Phrase processed by the AIAgent tag.

In the Analytics → Dialogs → Phrases section, you can use this comment to find the phrases that were processed by the agent.

Example

This example shows a bot that helps users create a card application.

theme: /

state: Start
    q!: $regex</start>
    script:
        // Create a new session
        $jsapi.startSession();
        // Set user data
        $session.userData = {
            userId: 111111,
            userName: "John"
        };
    # The bot immediately transitions to the /Card state
    go!: /Card

state: Card
    # An agent who processes bank card applications
    AIAgent:
        id = card_application
        model = gpt-4o
        role = Bank employee
        goal = Help the user create a card application
        instructions = Respond in the user’s language
        # User age, city, and phone number are required to create an application
        requiredData =
        [
            {
                "name":"userAge",
                "type":"number",
                "description":"User's age",
                "reasonForQuestion":"Only adults can create an application"
            },
            {
                "name":"userCity",
                "type":"string",
                "description":"User's city",
                "condition":"Only if the user is already 18 years old",
                "dependsOn":"userAge"
            },
            {
                "name":"userPhone",
                "type":"string",
                "description":"User's phone number",
                "condition":"Only if the user is already 18 years old",
                "dependsOn":"userAge"
            }
        ]
        # The agent can call a function to create a card application
        functions = ["createCardApplication"]
        # Passing user data as context
        context = {{JSON.stringify($session.userData)}}
        chatHistoryEnabled = true
        # The bot transitions to /Feedback if the agent achieves the goal
        nextState = /Feedback

state: Feedback
    InputText:
        prompt = Rate the bot performance
        varName = feedback

At the beginning of the dialog, the user enters the Card state:

  1. The agent asks the user for their age. If the user is under 18, the agent informs that they cannot issue a card.
  2. If the user is 18 or older, the agent asks for their city and phone number.
  3. The agent asks for confirmation before creating a card application.
  4. If the user confirms, the agent calls the createCardApplication function and reports that the application has been created.
  5. Since the goal is achieved, the bot transitions to the state specified in nextState: Feedback.
tip

You can use multiple agents in a single script. See the example in the Advanced features article.