Prompt Engineering for Developers: Advanced Patterns (ReAct, CoT, Few-Shot)
Move beyond basic prompts. Master Chain-of-Thought (CoT), Few-Shot Prompting, and ReAct patterns to get expert-level code and reasoning from LLMs.
As developers, we are used to deterministic syntax. If I miss a semicolon, the compiler yells. 1 + 1 always equals 2.
Prompts are different—they are probabilistic. 1 + 1 probably equals 2, unless the context is "binary numbers," in which case it might be 10.
To get consistent results from a probabilistic engine, you need Design Patterns.
1. Zero-Shot vs. Few-Shot Prompting
Zero-Shot: Asking without examples.
- Prompt: "Write a Python function to calculate Fibonacci."
- Result: Generic code. Maybe uses recursion (slow), maybe loops (fast). You don't know.
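That variance is concrete. The same zero-shot prompt can plausibly yield either of the following implementations (hypothetical model outputs, shown here only to illustrate the cost difference):

```python
# Two plausible outputs for the same zero-shot "Fibonacci" prompt.

def fib_recursive(n: int) -> int:
    """Naive recursion: exponential time, O(2^n) calls."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n: int) -> int:
    """Simple loop: linear time, constant space."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

print(fib_recursive(10), fib_iterative(10))  # both print 55, at wildly different cost
```

Both are "correct," but one falls over around n = 40. Zero-shot gives you no control over which you get.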
Few-Shot: Providing input/output examples to define the style.
Convert natural language to SQL.

Example 1:
Input: "Show all users who joined yesterday."
Output: SELECT * FROM users WHERE joined_at >= CURDATE() - INTERVAL 1 DAY;

Example 2:
Input: "Count orders by city."
Output: SELECT city, COUNT(*) FROM orders GROUP BY city;

Task:
Input: "Find the top spending customer."
Output:

- Why it works: LLMs are pattern matchers. By providing examples, you constrain the solution space. The model sees "Oh, we are writing standard SQL: uppercase keywords, no partial matches."
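Under the hood, a few-shot prompt is just a string template. A minimal builder sketch (the example pairs are the ones from the prompt above; adapt the format markers to taste):

```python
# (question, expected SQL) pairs that define the output style.
EXAMPLES = [
    ("Show all users who joined yesterday.",
     "SELECT * FROM users WHERE joined_at >= CURDATE() - INTERVAL 1 DAY;"),
    ("Count orders by city.",
     "SELECT city, COUNT(*) FROM orders GROUP BY city;"),
]

def build_few_shot_prompt(task: str) -> str:
    """Assemble instruction + examples + the new task into one prompt string."""
    lines = ["Convert natural language to SQL."]
    for i, (question, sql) in enumerate(EXAMPLES, start=1):
        lines.append(f'Example {i}:\nInput: "{question}"\nOutput: {sql}')
    # End on "Output:" so the model's completion IS the SQL.
    lines.append(f'Task:\nInput: "{task}"\nOutput:')
    return "\n\n".join(lines)

print(build_few_shot_prompt("Find the top spending customer."))
```

Ending the prompt on `Output:` matters: the model completes the pattern instead of chatting about it.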
2. Chain of Thought (CoT)
For complex logic, you must force the model to "show its work." If an LLM answers immediately, it's guessing.
The Trick: Append the phrase "Let's think step by step."
The Mechanism: Generating the "reasoning tokens" allows the model to compute intermediate states before committing to a final answer.
- Standard Prompt: "Sally has 3 apples. She buys 2 more. Then drops 1. How many?" -> "4". (Right).
- Complex Prompt: "Sally buys NVIDIA stock at $100. It goes up 50%. Then drops 50%. What is it worth?"
- Zero-Shot: "$100" (Wrong - it's $75).
- CoT: "Start $100. Up 50% is $150. Down 50% of $150 is $75. Answer: $75." (Right).
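The stock arithmetic is easy to verify, which is exactly what makes it a good CoT test case. The intermediate state the model must track:

```python
price = 100.0
price *= 1.5   # up 50%: 100 -> 150
price *= 0.5   # down 50% of the NEW price: 150 -> 75, not back to 100
print(price)   # 75.0
```

Percentage changes don't cancel because the second one applies to a different base. CoT works because the model writes down that intermediate base ($150) before answering.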
3. The ReAct Pattern (Reason + Act)
This is the foundation of AI Agents. You structure the loop so the model thinks, acts, and observes.
- Prompt:
Question: What is the elevation of the capital of Colorado?
Thought: Even though I might know this, I should verify. The capital is Denver.
Action: Search[Elevation of Denver]
Observation: 5,280 feet.
Thought: That is exactly one mile.
Answer: The elevation is 5,280 feet (1 mile).
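The transcript above implies a parse-dispatch loop around the model. A minimal sketch, assuming a `call_llm` function and a `search` tool (both hypothetical stand-ins, not real APIs):

```python
import re

def search(query: str) -> str:
    """Hypothetical tool; a real agent would call a search API here."""
    return "5,280 feet" if "Denver" in query else "unknown"

TOOLS = {"Search": search}

def react_loop(call_llm, question: str, max_steps: int = 5) -> str:
    """Feed the growing transcript back to the model until it emits Answer:."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = call_llm(transcript)  # model appends Thought + Action, or Answer
        transcript += step + "\n"
        if "Answer:" in step:
            return step.split("Answer:", 1)[1].strip()
        match = re.search(r"Action: (\w+)\s*\[(.+?)\]", step)
        if match:  # run the named tool and append the Observation
            tool, arg = match.groups()
            transcript += f"Observation: {TOOLS[tool](arg)}\n"
    return "No answer within step budget."
```

The key design choice: the model never executes anything. It only emits `Action:` lines; your code runs the tool and feeds the `Observation:` back in, which keeps the loop auditable.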
4. Persona Adoption (The "System" Role)
Don't leave the persona undefined.
- Weak: "Write code."
- Strong: "You are a Staff Engineer at Google. You prioritize Readability over Premature Optimization. You use TypeScript with strict mode. You prefer functional patterns."
This "primes" the model to access a specific subspace of its training data (high-quality engineering blogs vs. beginner tutorials).
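In practice, the persona goes in the `system` message. A minimal sketch of the message array most chat-completion APIs accept (the exact client call and model name vary by provider, so only the structure is shown):

```python
# The system message carries the persona; the user message carries the task.
messages = [
    {
        "role": "system",
        "content": (
            "You are a Staff Engineer at Google. "
            "You prioritize readability over premature optimization. "
            "You use TypeScript with strict mode and prefer functional patterns."
        ),
    },
    {"role": "user", "content": "Write a function that deduplicates an array."},
]
```

Keeping the persona in the system role (rather than repeating it in every user turn) means it persists across the whole conversation.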
5. The "Negative Constraints" Trick
LLMs are bad at "Don't do X." They often focus on X and do it anyway.
- Bad: "Don't be verbose."
- Good: "Be concise." (Positive constraint).
- Bad: "Don't use loops."
- Good: "Use recursion."
Mastering these patterns turns the LLM from a random text generator into a reliable function in your tech stack.
