Ollama models don't have the native function/tool-calling support that OpenAI models do. When using Ollama with agents, the model may not generate responses in the expected format, leading to parsing errors.
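As a hypothetical illustration of the mismatch, the agent's parser expects structured markers, while an untuned model may answer conversationally in a way the parser cannot handle:

```text
Expected by the agent:
  Thought: I need to multiply the numbers.
  Action: calculator
  Action Input: 25 * 4

Typical unparseable response from an untuned model:
  Sure! 25 times 4 is 100. Let me know if you need anything else.
```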
We've improved the MRKL agent's parseOutput function to be more flexible in detecting tool actions and final answers in model output; a sketch of this kind of lenient matching is shown below.
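The actual implementation lives in langchaingo's agents package; what follows is only a rough sketch of what lenient matching can look like, with the function and regex names invented for illustration:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var (
	// Match "Final Answer:" case-insensitively, tolerating extra whitespace.
	finalAnswerRe = regexp.MustCompile(`(?i)final\s+answer\s*:\s*(.*)`)
	// Match "Action:" / "Action Input:" pairs with flexible spacing.
	actionRe = regexp.MustCompile(`(?i)action\s*:\s*(.*?)\s+action\s+input\s*:\s*(.*)`)
)

// parseAgentOutput is a hypothetical stand-in for the kind of lenient
// matching the improved parseOutput performs.
func parseAgentOutput(output string) (action, input, final string, ok bool) {
	if m := finalAnswerRe.FindStringSubmatch(output); m != nil {
		return "", "", strings.TrimSpace(m[1]), true
	}
	// Collapse newlines so Action/Action Input can be matched in one pass.
	flat := strings.ReplaceAll(output, "\n", " ")
	if m := actionRe.FindStringSubmatch(flat); m != nil {
		return strings.TrimSpace(m[1]), strings.TrimSpace(m[2]), "", true
	}
	return "", "", "", false
}

func main() {
	// Lowercase "final answer" and irregular spacing still parse.
	_, _, final, ok := parseAgentOutput("Thought: done\nfinal answer :  100")
	fmt.Println(ok, final) // true 100
}
```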
When creating an agent with Ollama, provide explicit instructions about the expected format:
```go
systemPrompt := `You are a helpful assistant that uses tools to answer questions.

IMPORTANT: You must follow this exact format:

For using a tool:
Thought: [your reasoning]
Action: [tool name]
Action Input: [tool input]

For final answer:
Thought: I now know the final answer
Final Answer: [your answer]

Always use "Final Answer:" to indicate your final response.`

agent := agents.NewOneShotAgent(
	ollamaLLM,
	tools,
	agents.WithSystemMessage(systemPrompt),
)
```
Some Ollama models follow the agent format more reliably than others, so it's worth comparing a few candidates before settling on one; a sketch of such a comparison follows.
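This is a minimal sketch; the model names listed are only examples, so substitute models you've already pulled locally with `ollama pull`:

```go
package main

import (
	"log"

	"github.com/tmc/langchaingo/llms/ollama"
)

func main() {
	// Example model names; use whatever you have pulled locally.
	candidates := []string{"llama3", "mistral", "qwen2.5"}
	for _, model := range candidates {
		llm, err := ollama.New(ollama.WithModel(model))
		if err != nil {
			log.Printf("skipping %s: %v", model, err)
			continue
		}
		// Build the agent with llm here and run a few fixed questions,
		// counting parse failures, to pick the most format-compliant model.
		_ = llm
	}
}
```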
Lower temperature often helps with format consistency:
```go
llm, err := ollama.New(
	ollama.WithModel("llama3"),
	ollama.WithOptions(ollama.Options{
		Temperature: 0.2, // Lower temperature for more consistent formatting
	}),
)
```
With the improved parser handling these formatting variations automatically, here is a complete working example:
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/tools"
)

func main() {
	// Create Ollama LLM with appropriate settings.
	llm, err := ollama.New(
		ollama.WithModel("llama3"),
		ollama.WithOptions(ollama.Options{
			Temperature: 0.2,
			NumPredict:  512,
		}),
	)
	if err != nil {
		log.Fatal(err)
	}

	// Create tools.
	calculator := tools.Calculator{}

	// Create agent with clear instructions.
	systemPrompt := `You are a helpful math assistant.
Use the calculator tool for computations.
Format your responses as:
- For calculations: "Action: calculator" then "Action Input: [expression]"
- For final answers: "Final Answer: [result]"`

	agent := agents.NewOneShotAgent(
		llm,
		[]tools.Tool{calculator},
		agents.WithSystemMessage(systemPrompt),
		agents.WithMaxIterations(5),
	)

	// Create executor.
	executor := agents.NewExecutor(
		agent,
		agents.WithMaxIterations(5),
	)

	// Run the agent.
	result, err := executor.Call(
		context.Background(),
		map[string]any{
			"input": "What is 25 * 4?",
		},
	)
	if err != nil {
		log.Printf("Error: %v", err)
	} else {
		fmt.Printf("Result: %v\n", result["output"])
	}
}
```
```go
package agents_test

import (
	"context"
	"fmt"
	"strings"
	"testing"

	"github.com/stretchr/testify/require"
	"github.com/tmc/langchaingo/agents"
	"github.com/tmc/langchaingo/llms/ollama"
	"github.com/tmc/langchaingo/tools"
)

// TestOllamaAgent verifies the Ollama-backed agent works correctly.
func TestOllamaAgent(t *testing.T) {
	ctx := context.Background()

	llm, err := ollama.New(
		ollama.WithModel("llama3"),
	)
	require.NoError(t, err)

	calculator := tools.Calculator{}
	agent := agents.NewOneShotAgent(
		llm,
		[]tools.Tool{calculator},
		agents.WithMaxIterations(3),
	)
	executor := agents.NewExecutor(agent)

	testCases := []struct {
		input    string
		expected string
	}{
		{"What is 2+2?", "4"},
		{"Calculate 10*5", "50"},
		{"What is 100 divided by 4?", "25"},
	}

	for _, tc := range testCases {
		result, err := executor.Call(ctx, map[string]any{
			"input": tc.input,
		})
		if err != nil {
			t.Logf("Warning: %s failed: %v", tc.input, err)
			continue
		}
		output := fmt.Sprintf("%v", result["output"])
		if !strings.Contains(output, tc.expected) {
			t.Errorf("Expected %s in output, got: %s", tc.expected, output)
		}
	}
}
```
These improvements make Ollama models more reliable when used with agents, though they still require more careful prompt engineering than models with native function-calling support.