get_ollama_completion: Get a completion from the Ollama server API

Usage

get_ollama_completion(
  ollama_connection,
  model,
  prompts_vector,
  output_text_only = FALSE,
  num_predict = 200,
  temperature = 0.8,
  system_prompt = NA
)

Arguments

ollama_connection

A connection object holding the information needed to connect to the Ollama server.

model

Name of the model to use on the Ollama server. Only models that are already available (pulled) on the server can be used.

prompts_vector

A vector containing one or more messages acting as prompts for the completion.

output_text_only

A boolean value (default FALSE) indicating whether to return only the text of the completion (TRUE) or the whole response returned by the server (FALSE).

num_predict

The maximum number of tokens to generate in the response.

temperature

The temperature used when generating the answer. A temperature of 0 always gives the same answer; a temperature of 1 gives much more variation. Default is 0.8.

system_prompt

The system prompt passed to the model (default NA).

Details

A simple function to test the connection to the Ollama server.
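
Examples

The calls below are a minimal sketch; the connection helper get_ollama_connection() and the model name "llama3" are assumptions, so substitute the connection function and a model that are actually available in your setup.

# Assumed helper for creating the connection object; replace with your own
con <- get_ollama_connection()

# Full server response for a single prompt, deterministic output
resp <- get_ollama_completion(
  ollama_connection = con,
  model = "llama3",
  prompts_vector = "Name three uses of the letter R.",
  num_predict = 100,
  temperature = 0
)

# Text-only output for several prompts, with a system prompt
answers <- get_ollama_completion(
  ollama_connection = con,
  model = "llama3",
  prompts_vector = c("What is 2 + 2?", "Name a primary colour."),
  output_text_only = TRUE,
  system_prompt = "Answer in one short sentence."
)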