Tool Calling
Load the package.
In this vignette, we are going to see how you can declare tools and use them as part of an LLM workflow.
Tool Preparation
What are tools exactly? Let’s not make things more complicated than they are - tools are just functions that the model can ask you to call. The idea is that some LLMs are trained, during their development process, to call functions when they need specific information that is not part of their knowledge base. Not all LLMs support tool calling - at the time of writing, you can use Llama 3.1 and Llama 3.2, for example.
The first step in assigning a tool is to create the tool itself! Let’s write a simple function that uses the disastr.api package from CRAN. This function will retrieve a list of recent disasters happening around the world.
get_recent_disasters <- function(type_disaster = NA, country_filter = NA, ...) {
  library(disastr.api)
  # Pull the 100 most recent disaster events from the UN OCHA ReliefWeb API
  res <- disastr.api(limit = 100)
  # Optional filter on the type of disaster
  # (singularised so that e.g. "floods" also matches "flood")
  if (!is.na(type_disaster)) {
    type_disaster <- stringr::str_remove(type_disaster, "s$")
    res <- res |> dplyr::filter(grepl(type_disaster, event, ignore.case = TRUE))
  }
  # Optional filter on the country
  if (!is.na(country_filter)) {
    res <- res |> dplyr::filter(grepl(country_filter, country, ignore.case = TRUE))
  }
  # LLM tools should return their results as JSON
  res_f <- res |> jsonlite::toJSON()
  return(res_f)
}
Note that when you write a function for an LLM, you need to ensure that:
- the function returns its data in JSON format
- the function is loaded in the global environment before the LLM tries to call it
In the above function, we have added two filters:
- type_disaster: the type of disaster to filter results by, if needed.
- country_filter: the country of interest, if any.
Note that we have also added a “…” extra argument. This is on purpose: it means the function will also accept parameters beyond type_disaster and country_filter. We do this to guard against hallucinations. LLMs have a tendency to invent additional arguments that do not exist for a given function (even when you give them the exact list of accepted arguments), and the “…” parameter allows the function to ignore such extra parameters instead of failing with an error.
Note that smaller models are more prone to hallucinations when it comes to tool calling.
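Before wiring the function into a workflow, it is worth calling it yourself to check that it runs and returns valid JSON. Here is a minimal sketch (the bogus_arg parameter is made up on purpose, to show how “…” absorbs a hallucinated argument):
# 'bogus_arg' does not exist in the function signature: it is silently
# swallowed by '...' and the call still succeeds
out <- get_recent_disasters(type_disaster = "flood", bogus_arg = "ignored")
# TRUE if the returned string is valid JSON
jsonlite::validate(out)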
Tool Declaration
Now that we have our function, we need to prepare its declaration, which we write as a list in R. You can follow the format below, as advised for Llama 3.1 and later:
tool_list <- list(
  list(
    type = "function",
    "function" = list(
      name = "get_recent_disasters",
      description = "get information about recent disasters that are happening or happened worldwide",
      parameters = list(
        type = "object",
        properties = list(
          type_disaster = list(
            type = "string",
            description = "the type of disaster to search for. Make sure this is the singular version of the word"
          ),
          country_filter = list(
            type = "string",
            description = "a specific country you want to filter results for, related to disasters"
          )
        )
      )
    )
  )
)
As you can see, you need to describe what the function does and what its arguments mean, so that the LLM can grasp when it is a good time to use it. Note that you are not limited to declaring a single function: you can include several functions (tools) in the declaration above, which increases the ability of your workflow to handle different types of requests.
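For example, a two-tool declaration is simply a list with two entries of the same shape (get_weather is a hypothetical second tool, shown only to illustrate the structure):
multi_tool_list <- list(
  # First tool: the declaration we just wrote
  tool_list[[1]],
  # Second tool: a hypothetical weather lookup, for illustration only
  list(
    type = "function",
    "function" = list(
      name = "get_weather",
      description = "get the current weather for a given city",
      parameters = list(
        type = "object",
        properties = list(
          city = list(
            type = "string",
            description = "the city of interest"
          )
        )
      )
    )
  )
)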
Building a workflow with tools
Now let’s build a workflow that integrates this tool-calling capability, using the add_tools_declaration() function:
wflow_tool <- ai_workflow() |>
  set_connector("ollama") |>
  set_temperature(0) |>
  set_model(model_name = "llama3.2:latest") |>
  set_system_prompt("you are an AI assistant capable of researching recent disaster information with a tool connected to the Internet.") |>
  add_tools_declaration(tools = tool_list)
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.
Now that the workflow is ready, we can try it out. While you won’t see the internal details, what actually happens is that the LLM first issues a function call to find out what happened in Mexico, and then, based on the information returned by the function, formulates a second answer (the final one) that uses that information.
wflow_tool |>
  process_prompts("Tell me what recent disasters have happened in Mexico?") |>
  pull_final_answer()
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → Chat mode
#> → Adding tools
#>
#> The disastR.api package may be cited as:
#> Dworschak, Christoph. 2021. "Disastr.api: Wrapper for the UN OCHA
#> ReliefWeb Disaster Events API." R package. CRAN version 1.0.6.
#> Your disaster event data request was successful.
#> [1] "Based on the information I have access to, here are some recent disasters that have occurred in Mexico:\n\n1. Hurricane Beryl (June 2024): A tropical cyclone that affected several countries in the Caribbean and Central America, including Mexico.\n2. Earthquake (February 2023): A magnitude 7.6 earthquake struck southern Mexico, causing widespread damage and loss of life.\n\nPlease note that my knowledge cutoff is December 2023, so I may not have information on more recent disasters that occurred after this date."
You can see the difference versus the same kind of workflow without tool support:
wflow_no_tool <- ai_workflow() |>
  set_connector("ollama") |>
  set_temperature(0) |>
  set_model(model_name = "llama3.2:latest")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.
wflow_no_tool |>
  process_prompts("Tell me what recent disasters have happened in Mexico?") |>
  pull_final_answer()
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode
#> [1] "I'll provide you with some information on recent natural disasters that have occurred in Mexico:\n\n1. **Hurricane Patricia (2015)**: A Category 5 hurricane made landfall in Jalisco, Mexico, causing widespread damage and flooding.\n2. **Earthquake in Puebla (2017)**: A magnitude 7.1 earthquake struck the state of Puebla, killing over 300 people and injuring many more.\n3. **Hurricane Odile (2014)**: Although not as destructive as Patricia, Hurricane Odile still caused significant damage to coastal areas in Baja California Sur.\n4. **Floods in Oaxaca (2020)**: Heavy rainfall led to severe flooding in the state of Oaxaca, affecting thousands of people and causing widespread destruction.\n5. **Volcanic eruptions in Mexico City (2019-2021)**: The Popocatépetl volcano erupted several times between 2019 and 2021, forcing"
As you can see, if you don’t provide any tools, the LLM relies on whatever it can recall from its training data (or whatever it can hallucinate…).
Offline Tools
While the examples you often see online involve tools that connect to APIs or other online sources to pull information, it does not have to be this way. You can, for example, build a tool that handles calculations and math operations, since LLMs are notoriously bad at those (for good reason: they do not embed the concept of numbers, only tokens).
Let’s first declare a function to do maths:
do_math <- function(expression_to_evaluate) {
  # Evaluate the expression string as R code
  res <- eval(parse(text = expression_to_evaluate))
  # Return the expression and its result as a JSON string
  return(paste0('{"expression":"', expression_to_evaluate, '","result":"', res, '"}'))
}
Don’t expect too much from the above function: it will only work for simple operations, not for solving binomial equations and the like. Also note that eval(parse(...)) will execute arbitrary R code, so a tool like this should only be used in a trusted setting.
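As with the first tool, it is worth a quick manual check before handing it to the LLM. Given the definition above, this should print something like:
do_math("1321212 * 3 / 7")
#> {"expression":"1321212 * 3 / 7","result":"566233.714285714"}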
Now we declare our math tool:
tool_list <- list(
  list(
    type = "function",
    "function" = list(
      name = "do_math",
      description = "Do simple math calculations by providing a math expression to evaluate",
      parameters = list(
        type = "object",
        properties = list(
          expression_to_evaluate = list(
            type = "string",
            description = "the mathematical expression to evaluate, without an equal sign"
          )
        ),
        required = list("expression_to_evaluate")
      )
    )
  )
)
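If you are curious about what such a declaration looks like once serialized, you can inspect the JSON yourself (an optional sanity check; the exact payload the package sends to the API may differ):
# Pretty-print the declaration as JSON
jsonlite::toJSON(tool_list, auto_unbox = TRUE, pretty = TRUE) |> cat()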
Let’s ask Snoop Dogg to answer a math problem:
myflow_math <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.2:latest") |>
  set_n_predict(1000) |>
  set_temperature(0.2) |>
  set_style_of_voice("Snoop Dogg") |>
  add_tools_declaration(tools = tool_list)
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.
myflow_math |>
  process_prompts(prompts_vector = "Can you help me solve this math problem? How much is 1321212* 3 , and dividing this whole thing by 7 in the end?") |>
  pull_final_answer()
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode
#> → Adding tools
#> [1] "Yo, what's good fam? So you got a math problem that's like, \"Hey Snoop, can you help me out?\" Alright, let's get to it.\n\nSo, we gotta multiply 1,321,212 by 3 first, ya dig? That's like, 3 times the number, foo'. And then we divide that whole thing by 7. Word.\n\nAlright, so... (1321212 * 3) = 3,966,336\n\nNow, let's get to dividing it by 7...\n\n(3,966,336 / 7) = 566,233.71\n\nSo there you have it, my G! The answer is like, 566,233.71. You feel me?"
If you take a calculator, you will see that the expected answer is indeed roughly 566233.71. The above LLM, equipped with the tool, should give you the right answer.
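You can also double-check the arithmetic directly in base R:
1321212 * 3 / 7
#> [1] 566233.7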
Now you can see the difference it makes when the workflow is not equipped with the same tool:
myflow_clueless_at_math <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.2:latest") |>
  set_n_predict(1000) |>
  set_temperature(0.2) |>
  set_style_of_voice("Snoop Dogg")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.
myflow_clueless_at_math |>
  process_prompts(prompts_vector = "Can you help me solve this math problem? How much is 1321212* 3 , and dividing this whole thing by 7 in the end?") |>
  pull_final_answer()
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode
#> [1] "Yo, what's good fam? It sounds like you're tryin' to get that math done, know what I'm sayin'? Alright, let's break it down. You gotta multiply 1,321,212 by 3 first, then divide the whole thing by 7.\n\nSo, ya do that multiplication, and... (pauses for a sec) ...you're lookin' at somethin' like this: 1321212 * 3 = 3963646. Word up!\n\nNow, you gotta take that result and divide it by 7. So, we got 3963646 ÷ 7 = 566522.\n\nThere ya have it, homie! The answer is 566,522. You're all set now, ain't nothin' to worry 'bout."
The answer differs, and it is incorrect: 1321212 × 3 is 3963636, not 3963646, and dividing by 7 gives about 566233.71, not 566522. In this case the result is not too far off, but still wrong. An LLM equipped with tools will typically perform better across a range of problems like these.