37. Agentic Workflows (Without Letting It Run Wild)

What is an Agent?

An "Agent" is just an LLM in a loop with tools. That's it. No magic, no sentience—just a while loop.

// The core agent loop
while (!goalMet && stepsRemaining > 0) {
  const thought = await model.reason(context);
  const action = await model.decideAction(thought, availableTools);
  const result = await execute(action);
  context.append({ thought, action, result });
  goalMet = await model.checkGoalMet(context); // without this, the loop only ever stops on the step budget
  stepsRemaining--;
}

The key difference from a single LLM call: the model decides what to do next based on what it observes. It's autonomous within boundaries.
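
To make the loop concrete, here is one way the execute step might look. This is a minimal sketch: the Action shape, the toolRegistry object, and the stub tool bodies are illustrative assumptions, not a prescribed API.

// Dispatching a model-chosen action to a real tool (illustrative sketch)
type Action = { tool: string; args: Record<string, unknown> };

const toolRegistry: Record<string, (args: Record<string, unknown>) => Promise<string>> = {
  // Stub implementations, just to show the shape
  read_file: async (args) => `contents of ${args.path}`,
  search: async (args) => `results for ${args.query}`,
};

async function execute(action: Action): Promise<string> {
  const tool = toolRegistry[action.tool];
  if (!tool) {
    // Guards against hallucinated tools: the error is fed back to the model as an observation
    return `Error: unknown tool "${action.tool}"`;
  }
  return tool(action.args);
}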

Anatomy of an Agent

┌────────────────────────────────────────────────────────────────┐
│                       AGENT ARCHITECTURE                       │
├────────────────────────────────────────────────────────────────┤
│                                                                │
│  ┌─────────────────────────────────────────────────────────┐   │
│  │                       AGENT LOOP                        │   │
│  │  ┌─────────┐   ┌─────────┐   ┌─────────┐   ┌─────────┐  │   │
│  │  │ OBSERVE │──▶│  THINK  │──▶│   ACT   │──▶│ UPDATE  │  │   │
│  │  └─────────┘   └─────────┘   └─────────┘   └─────────┘  │   │
│  │       ▲                                         │       │   │
│  │       └─────────────────────────────────────────┘       │   │
│  └─────────────────────────────────────────────────────────┘   │
│                              │                                 │
│              ┌───────────────┼───────────────┐                 │
│              ▼               ▼               ▼                 │
│         ┌─────────┐     ┌─────────┐     ┌─────────┐            │
│         │  TOOLS  │     │ MEMORY  │     │ POLICY  │            │
│         │         │     │         │     │         │            │
│         │ read()  │     │ context │     │ budget  │            │
│         │ write() │     │ history │     │ rules   │            │
│         │ search()│     │ state   │     │ gates   │            │
│         └─────────┘     └─────────┘     └─────────┘            │
│                                                                │
└────────────────────────────────────────────────────────────────┘
Component   Purpose                                Example
Tools       Actions the agent can take             read_file, run_command, search_web
Memory      Context accumulated during execution   Scratchpad of observations and actions
Policy      Rules that constrain behavior          Max 10 steps, no file deletion
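
As a rough sketch, the three components can be expressed as plain interfaces. The names below (Tool, Memory, MemoryEntry, Policy) are illustrative, not taken from any particular framework.

// Illustrative shapes for the three supporting components
interface Tool {
  name: string;                         // e.g. "read_file"
  run: (args: Record<string, unknown>) => Promise<string>;
}

interface MemoryEntry {
  thought: string;
  action: string;
  result: string;
}

interface Memory {
  history: MemoryEntry[];               // scratchpad of observations and actions
  append: (entry: MemoryEntry) => void;
}

interface Policy {
  maxSteps: number;                     // budget
  allowedTools: string[];               // rules
  requireApproval: string[];            // gates: pause for a human before these run
}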

The Danger Zone

Agents are prone to failure modes that don't exist in simple chat:

Failure Mode          Example                                                             Mitigation
Infinite Loops        "I need to check the file" → "Checked" → "I need to check again"   Step budget, loop detection
Goal Drift            Asked to fix a typo, rewrites entire database layer                 Scope constraints, human checkpoints
Cost Explosion        One query triggers 100 model calls                                  Token/call budgets, early termination
Destructive Actions   Agent decides to rm -rf to "clean up"                               Tool allowlists, sandboxing
Hallucinated Tools    Agent calls a tool that doesn't exist                               Strict tool validation
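
Most of these mitigations are only a few lines of code. As one hedged example, a crude loop detector can halt the agent when the same tool call repeats with the same arguments; the history shape here is an assumption, matching the scratchpad idea above.

// Naive loop detection: true if the same tool call repeats `threshold` times
function detectLoop(history: { tool: string; args: string }[], threshold = 3): boolean {
  const counts = new Map<string, number>();
  for (const step of history) {
    const key = `${step.tool}:${step.args}`;
    const seen = (counts.get(key) ?? 0) + 1;
    if (seen >= threshold) return true;
    counts.set(key, seen);
  }
  return false;
}

Checked once per iteration alongside the step budget, this turns an endless "check the file again" cycle into an early exit with a readable reason.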

Safe Patterns

We focus on Bounded Agents—agents with explicit limits:

// Bounded agent configuration
const agentConfig = {
  // Step budget - hard limit on iterations
  maxSteps: 10,
  
  // Token budget - prevent runaway costs
  maxTokens: 50000,
  
  // Time budget - prevent hanging
  maxDurationMs: 60000,
  
  // Tool restrictions
  allowedTools: ['read_file', 'search', 'write_draft'],
  blockedTools: ['delete_file', 'run_command', 'send_email'],
  
  // Human checkpoints
  requireApproval: ['write_file', 'create_pr'],
  
  // Scope constraints
  allowedPaths: ['src/**', 'tests/**'],
  blockedPaths: ['.env', 'secrets/**', 'node_modules/**'],
};
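
A config object only helps if the loop enforces it. The sketch below shows one way to run those checks before each action, assuming the agentConfig above; askHuman is a hypothetical approval hook, not an existing API, and the handling of approval-gated tools is one possible reading of the config.

// Hypothetical approval hook: a real system might prompt in a CLI, a Slack thread, or a review UI
async function askHuman(question: string): Promise<boolean> {
  console.log(question);
  return false; // default-deny in this sketch
}

// Run before every action: budgets first, then tool gates
async function guardAction(
  action: { tool: string },
  state: { step: number; tokensUsed: number; startedAt: number },
): Promise<void> {
  if (state.step >= agentConfig.maxSteps) throw new Error('Step budget exhausted');
  if (state.tokensUsed >= agentConfig.maxTokens) throw new Error('Token budget exhausted');
  if (Date.now() - state.startedAt >= agentConfig.maxDurationMs) throw new Error('Time budget exhausted');

  if (agentConfig.blockedTools.includes(action.tool)) {
    throw new Error(`Tool blocked: ${action.tool}`);
  }

  // Approval-gated tools may run only after a human says yes
  if (agentConfig.requireApproval.includes(action.tool)) {
    const approved = await askHuman(`Agent wants to run ${action.tool}. Allow?`);
    if (!approved) throw new Error(`Approval denied for ${action.tool}`);
  } else if (!agentConfig.allowedTools.includes(action.tool)) {
    throw new Error(`Tool not allowed: ${action.tool}`);
  }
}

Path allowlists and blocklists follow the same pattern: check the target path against allowedPaths and blockedPaths before the tool runs, and refuse with an error the model can read.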

When to Use Agents

Use Agents When...                             Don't Use Agents When...
Task requires multiple steps with dependencies   Single-shot question/answer
Information needed isn't known upfront           All context available in prompt
Actions depend on intermediate results           Straightforward transformation
Human oversight is built in                      Fully autonomous operation needed

Start Without Agents

Most LLM applications don't need agents. A well-structured single prompt with good context often outperforms a poorly designed agent. Add agentic behavior only when a single, well-crafted call has clearly hit its limits.
