
Start by loading the package.

Setting up your Ollama connection

Ensure that you have a running Ollama instance, either on your local machine or somewhere reachable on your network. Installing Ollama is straightforward and works on multiple platforms. Ollama runs even on CPU-only machines, though a GPU (especially Nvidia) makes processing much faster. Once Ollama is installed, you need to download at least one model to get started.
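For example, assuming the ollama command-line tool is on your PATH, you could pull the model used throughout this article:

```shell
# Download a quantized Llama 3.1 8B instruct model (used in the examples below)
ollama pull llama3.1:8b-instruct-q5_K_M

# Check which models are available locally
ollama list
```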

You can establish a connection using the function below.

By default it uses localhost and port 11434. If you want to use a different setup, you can change these parameters:


conn <- get_ollama_connection(ip_ad = "127.0.0.1", port = "3524")
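With no arguments, get_ollama_connection() should simply use the defaults mentioned above (localhost and port 11434):

```r
conn <- get_ollama_connection()
```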

Your First Workflow

This is one of the simplest workflows you can make. To create a workflow, you start with the ai_workflow() container command and then pipe instructions to it. In the example below we specify that we want to use the ollama connector and the llama3.1 model.


wflow_basic <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.1:8b-instruct-q5_K_M")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.

When you select the ollama connector, it uses the default connection parameters. You can, however, set arbitrary IP and port values, as shown below, if you want to connect to an Ollama instance running on a different machine.


wflow_basic_on_different_machine <- ai_workflow() |>
  set_connector("ollama") |>
  set_ip_addr(ip_addr = "192.168.1.12") |>
  set_port(port = 5256) |>
  set_model(model_name = "llama3.1:8b-instruct-q5_K_M")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.
#> → IP address has been changed to 192.168.1.12.
#> → Port has been changed to 5256.

At this stage your workflow exists, but does not do anything.

The next step is to ask it to run specific tasks. We can start with a simple prompt.


wflow_basic |> 
  process_prompts(prompts_vector = "why is the sky blue? Answer with a short explanation") |>
  pull_final_answer() |> cat()
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Temperature was not specified and given a default value of 0.8.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode
#> The sky appears blue because of a phenomenon called scattering, where shorter (blue) wavelengths of light are scattered more than longer (red) wavelengths by tiny molecules of gases in the atmosphere. This scattering effect gives our sky its distinct blue color!

By default the model's answer comes back as a list, so use the pull_final_answer() function to fetch the final textual answer from that list.
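For example, you can store the raw result, inspect the list structure, and then extract the answer. This sketch assumes pull_final_answer() also accepts a stored result, as the pipe syntax above suggests:

```r
result <- wflow_basic |>
  process_prompts(prompts_vector = "why is the sky blue? Answer with a short explanation")

# Inspect the full list returned by the model
str(result)

# Fetch only the final textual answer
result |> pull_final_answer() |> cat()
```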

Customizing Output

You can now leverage more of the package's features, such as setting the audience for your answers. Here we specify that the audience is five-year-old kids.


wflow_eli5 <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.1:8b-instruct-q5_K_M") |>
  set_audience("Five years old kids")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.

You can see how it changes the output. Don’t expect a great explanation!


explanation_eli5 <- wflow_eli5 |> 
  process_prompts(prompts_vector = "why is the sky blue? Answer with a short explanation") |>
  pull_final_answer() 
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Temperature was not specified and given a default value of 0.8.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode

This is the kind of explanation you get for the little kids out there:

*“Let me tell you a SECRET about the SKY!

The sky looks BLUE because of something called LIGHT! When sunlight comes from the sun, it’s like a big bunch of colorful rays shining towards us.

And guess what happens when these light rays travel through our air in the atmosphere? They get SCATTERED and start bouncing around everywhere!

Blue is one of those colors that gets scattered more than any other color. So when we look up at the sky, all those blue light rays bounce back to our eyes, making it LOOK BLUE! Isn’t that COOL?!”*

Note that you can also modify an existing workflow directly by piping parameter-setting functions into it. For example, let’s modify the existing workflow wflow_basic before sending the prompt:


explanation_low_tech <- wflow_basic |> 
  set_audience("people without scientific knowledge or background") |>
  process_prompts(prompts_vector = "why is the sky blue? Answer with a short explanation") |>
  pull_final_answer() 
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Temperature was not specified and given a default value of 0.8.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode

You should get something like this:

*“The sky appears blue because of something called light scattering.

When sunlight enters our atmosphere, it’s made up of all different colors like a big ol’ rainbow. But when these tiny particles in the air (like dust and water vapor) bounce off those colorful lights, they scatter shorter wavelengths more than longer ones. And guess what? Blue is one of those short-wavelength colors!

So, as our eyes see this scattered light from every direction above us, we perceive it as a big blue sky!”*

You can also set a specific tone or personality to answer a question.


wflow_snoop <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.1:8b-instruct-q5_K_M") |>
  set_style_of_voice("Snoop Dogg")
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.

snoop_answer <- wflow_snoop |>
  process_prompts(prompts_vector = "Explain how the stock exchange works in a short paragraph") |>
  pull_final_answer() 
#> → Frequency Penalty was not specified and given a default value of 1.
#> → Presence Penalty was not specified and given a default value of 1.5.
#> → Repeat Penalty was not specified and given a default value of 1.2.
#> → Temperature was not specified and given a default value of 0.8.
#> → N_predict was not specified and given a default value of 200.
#> → Mode was not specified and 'chat' was selected by default.
#> → System Prompt was not specified and given a default value of 'You are a helpful AI assistant.'.
#> → Chat mode

You should get something like this:

*“Yo what’s good fam? Alright so you wanna know ‘bout da stock exchangah, right? Okay lemme break it down for ya. See, it’s like one big ol’ party where investors come to buy and sell shares of companies they think is gonna be hot in the future. They put their money on these stocks, kinda like bettin’ on a sports team or somethin’. If da company does good, da stock price goes up and you make some dough! But if it tanks… well, let’s just say you might wanna sit this one out, G. The exchange is where all the buyin’ and sellin’ happens, kinda like an online market but with real people makin’ deals face-to-face or over phone calls. Word.”*
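Since workflows are built by piping setters together, the audience and style-of-voice settings can presumably be combined in a single workflow. This is an untested sketch based only on the setters shown above:

```r
# Combine an audience with a style of voice in one workflow
wflow_snoop_eli5 <- ai_workflow() |>
  set_connector("ollama") |>
  set_model(model_name = "llama3.1:8b-instruct-q5_K_M") |>
  set_audience("Five years old kids") |>
  set_style_of_voice("Snoop Dogg")
```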