Set the number of tokens to be predicted (maximum) by the flow.
set_n_predict.Rd
set_n_predict
Sets the maximum number of tokens to be predicted by the flow.
Details
This sets the maximum number of tokens to be predicted by the flow. Note that this does not mean the LLM will constrain its answer to fit within that number: if the planned answer exceeds the n_predict value, the answer will simply be cut off at that point.
Examples
wflow <- ai_workflow() |>
  set_connector("ollama") |>
  set_n_predict(n_predict = 500)
#> → Default IP address has been set to 127.0.0.1.
#> → Default port has been set to 11434.