Prompt Engineering Techniques: Chain of Thought and Tool Use
Chain of Thought Prompting
Chain of thought (CoT) prompting asks the model to show its reasoning steps before giving a final answer. Instead of "What is the answer?", prompt with "Think through this step by step, then give your final answer." This technique can substantially improve accuracy on math, logic, and multi-step reasoning problems, with reported gains of 20-40% on some benchmarks. The model explicitly works through intermediate steps rather than jumping to conclusions.
Implementation pattern: request reasoning, then extract the final answer programmatically. The model outputs "Step 1: ... Step 2: ... Final answer: X" and your code parses for text after "Final answer:" rather than using the entire response. This separates the reasoning (useful for debugging) from the actionable output.
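The extraction step above can be sketched as follows; the marker string "Final answer:" and the fallback behavior are implementation choices, not a fixed convention.

```python
import re

def extract_final_answer(response: str) -> str:
    """Return the text after the last 'Final answer:' marker.

    Falls back to the whole response if the marker is absent,
    so callers always get something usable.
    """
    matches = re.findall(r"Final answer:\s*(.+)", response, re.IGNORECASE)
    return matches[-1].strip() if matches else response.strip()

response = (
    "Step 1: 17 * 3 = 51\n"
    "Step 2: 51 + 9 = 60\n"
    "Final answer: 60"
)
print(extract_final_answer(response))  # -> 60
```

Taking the last match rather than the first guards against the model mentioning the phrase mid-reasoning before its actual conclusion.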
Tool Use and Function Calling
Tool use enables models to invoke external functions: database queries, API calls, calculations. Instead of the model hallucinating a stock price, it calls a function that returns the real value. The prompt defines available tools with descriptions. The model outputs a structured tool call. Your code executes the call and returns results to the model for further processing.
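A minimal sketch of the execute-and-return step, assuming the model emits tool calls as JSON of the form `{"tool": ..., "args": {...}}`; the tool name, registry, and wire format here are illustrative, and real provider APIs each define their own.

```python
import json

# Hypothetical tool: a stand-in for a real market-data API.
def get_stock_price(symbol: str) -> float:
    prices = {"ACME": 123.45}
    return prices[symbol]

# Registry mapping tool names (as the model sees them) to functions.
TOOLS = {"get_stock_price": get_stock_price}

def execute_tool_call(raw: str) -> str:
    """Parse a model-emitted tool call, run the named tool, and return
    the result as JSON text to feed back to the model."""
    call = json.loads(raw)
    fn = TOOLS[call["tool"]]
    result = fn(**call["args"])
    return json.dumps({"tool": call["tool"], "result": result})

reply = execute_tool_call('{"tool": "get_stock_price", "args": {"symbol": "ACME"}}')
print(reply)  # -> {"tool": "get_stock_price", "result": 123.45}
```

In production code the lookup and execution would be wrapped in error handling, with failures reported back to the model as structured results rather than raised as exceptions.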
Effective tool descriptions are critical: vague descriptions lead to incorrect tool selection. A good description covers what the tool does, its required parameters with types and constraints, example inputs and outputs, and when to use (and when NOT to use) the tool. More detailed descriptions measurably reduce tool selection errors.
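A description covering those points might look like the following; this uses the JSON-Schema style common to several provider APIs, but field names vary by provider, so treat it as a sketch rather than any specific API's format.

```python
# Illustrative tool definition: the tool name, fields, and wording
# are examples, not a specific provider's required schema.
stock_price_tool = {
    "name": "get_stock_price",
    "description": (
        "Return the latest trading price in USD for a stock ticker. "
        "Use only for current prices of exchange-listed stocks; do NOT "
        "use for historical prices or assets without a ticker symbol. "
        "Example: symbol 'ACME' -> 123.45"
    ),
    "parameters": {
        "type": "object",
        "properties": {
            "symbol": {
                "type": "string",
                "description": "Ticker symbol, uppercase letters only, e.g. 'ACME'",
            },
        },
        "required": ["symbol"],
    },
}
```

Note how the description states both the positive case and the exclusions; the exclusions are what steer the model away from misusing the tool for adjacent-looking requests.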
Combining Techniques
CoT and tool use combine naturally. The model reasons about what information it needs, calls tools to get that information, reasons about the results, and produces a final answer. This ReAct (Reasoning + Acting) pattern is particularly powerful for complex tasks requiring both knowledge retrieval and multi-step logic.
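The reason-act-observe cycle above can be sketched as a loop. To keep the example runnable without an API key, a scripted stand-in plays the model's role; `fake_model`, the script contents, and the tool are all illustrative.

```python
# Minimal ReAct-style loop. The "model" is a scripted stub so the
# control flow runs standalone; in practice each step would be an
# LLM call that sees the prior observation.
def get_stock_price(symbol: str) -> float:
    return {"ACME": 123.45}[symbol]          # stand-in data source

TOOLS = {"get_stock_price": get_stock_price}

SCRIPT = [
    # Step 0: the model reasons that it needs data, then acts.
    {"thought": "I need ACME's current price.",
     "action": {"tool": "get_stock_price", "args": {"symbol": "ACME"}}},
    # Step 1: having observed the result, it reasons and answers.
    {"thought": "100 shares at 123.45 each is 12345.00.",
     "final": "100 shares of ACME cost $12345.00."},
]

def fake_model(step: int, observation):
    return SCRIPT[step]

def react_loop(max_steps: int = 5) -> str:
    observation = None
    for step in range(max_steps):
        msg = fake_model(step, observation)   # reason
        if "final" in msg:
            return msg["final"]
        call = msg["action"]
        observation = TOOLS[call["tool"]](**call["args"])  # act, observe
    raise RuntimeError("no final answer within step budget")

print(react_loop())  # -> 100 shares of ACME cost $12345.00.
```

The step budget matters: without `max_steps`, a model that keeps requesting tools would loop indefinitely, so real agent frameworks cap iterations and surface the failure.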