set_n_predict sets the maximum number of tokens to be predicted by the workflow.

Usage

set_n_predict(workflow_obj, n_predict)

Arguments

workflow_obj

an ai_workflow object, as created by ai_workflow().

n_predict

the maximum number of tokens the workflow may predict.

Details

This sets the maximum number of tokens to be predicted by the workflow. Note that this does not mean the LLM will constrain its answer to fit within that number: if the planned answer exceeds the n_predict value, the answer simply stops at that point.
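
The truncation behaviour described above can be seen by setting a deliberately low cap. A minimal sketch, reusing only the functions shown on this page (the exact cut-off point depends on the model and prompt):

```r
# Sketch: a workflow whose answers are capped at 50 predicted tokens.
# With such a low n_predict, longer answers will be cut off mid-sentence
# rather than rephrased to fit.
wflow_short <- ai_workflow() |>
  set_connector("ollama") |>
  set_n_predict(n_predict = 50)
```

If you find answers ending abruptly, raise n_predict rather than expecting the model to shorten its reply.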

Examples

wflow <- ai_workflow() |>
  set_connector("ollama") |>
  set_n_predict(n_predict = 500)
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.