
The thousand token rule


From what to how

Requirements tell you what the system should do. Technical planning decides how to build it. This is where you evaluate libraries, choose architectural approaches, design interfaces, map dependencies, and analyze failure modes. The goal is the same as with requirements: constrain the solution space before writing code.

The difference is that technical planning can include implementation details. Requirements said "cache API responses for 5 minutes to reduce database load." Technical planning evaluates Redis versus memcached versus in-memory caching, chooses one, and documents why. That choice constrains design and implementation.

Evaluating dependencies

Adding a library to your project seems cheap. Install it, import it, use it. But dependencies have costs that compound over time: security vulnerabilities, breaking changes, maintenance burden, license constraints, and the risk that the maintainer abandons the project.

Before adding a dependency, check for known issues. Search the GitHub issues for the library. Filter by "bug" and sort by reactions. If the top issues are things like "critical security vulnerability unfixed for 6 months" or "memory leak in core functionality," you're looking at a maintenance problem. Check CVE databases for disclosed vulnerabilities. A library with a history of security issues will likely have more.
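The issue check can be partly scripted. The sketch below uses GitHub's search API to pull the most-reacted open issues labeled "bug" for a repository; the label name and result fields are assumptions about how the project triages issues, so adjust them per library.

```python
# Sketch: fetch the most-reacted open "bug" issues for a library you're
# evaluating, via GitHub's issue search API. The "bug" label is an
# assumption about the project's triage conventions.
import json
import urllib.parse
import urllib.request

def build_search_url(owner: str, repo: str, label: str = "bug") -> str:
    """Search one repo's open issues, sorted by reaction count (most first)."""
    query = f"repo:{owner}/{repo} label:{label} state:open"
    params = urllib.parse.urlencode(
        {"q": query, "sort": "reactions", "order": "desc", "per_page": "10"}
    )
    return f"https://api.github.com/search/issues?{params}"

def top_bug_issues(owner: str, repo: str) -> list[tuple[int, str]]:
    """Return (reaction_count, title) pairs for the top open bug reports."""
    with urllib.request.urlopen(build_search_url(owner, repo)) as resp:
        data = json.load(resp)
    return [
        (item["reactions"]["total_count"], item["title"])
        for item in data["items"]
    ]
```

If the titles that come back read like "memory leak in core functionality," that is the maintenance problem described above, surfaced in one function call.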

License compatibility matters more than most developers think. MIT and Apache 2.0 are permissive and cause few problems. GPL requires you to open-source your entire application if you distribute it, which rules it out for most commercial software. AGPL extends that to network use, meaning even running AGPL code on your server triggers the requirement. Some licenses have patent clauses that affect how you can use the code. Read the license before you depend on the code.

Maintenance status tells you whether the library will still work a year from now. Check the last commit date. If it's been six months with no activity and there are open issues and pull requests, the maintainer might have moved on. Check response time to issues: how long between when someone reports a bug and when a maintainer responds? Bus factor matters too. If one person has made 95% of commits, that's a risk. They might get hit by a bus, change jobs, or lose interest.
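Bus factor is easy to quantify. Feed a commit-author histogram (for example, the output of `git shortlog -sn`) into a one-liner; the numbers below are hypothetical.

```python
# Sketch: quantify bus factor from a commit-author histogram. The input
# could come from `git shortlog -sn`; the author counts are made up.
def bus_factor_share(commits_by_author: dict[str, int]) -> float:
    """Fraction of all commits made by the single busiest contributor."""
    total = sum(commits_by_author.values())
    if total == 0:
        return 0.0
    return max(commits_by_author.values()) / total

# A 95% share from one maintainer is the warning sign described above.
share = bus_factor_share({"alice": 950, "bob": 30, "carol": 20})
```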

The evaluation takes maybe 30 minutes and saves you from dependencies that cause problems later. Replacing a dependency after you've built on it costs substantially more than choosing carefully up front.

When to build instead of depend

If the functionality you need is simple and the available libraries are complex, unmaintained, or have problematic licenses, consider building it yourself. A 200-line caching implementation you control beats a 50,000-line library with security issues and an abandoned maintainer.
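A minimal sketch of that "build it yourself" option: a TTL cache in a couple dozen lines. This is illustrative, not a drop-in implementation; it has no eviction policy or thread safety, which may or may not matter for your use case.

```python
# Sketch: a minimal TTL cache you control, as an alternative to a large
# caching dependency. No eviction policy or locking -- assumptions that
# hold only for small, single-threaded use.
import time

class TTLCache:
    def __init__(self, ttl_seconds: float = 300.0):
        self.ttl = ttl_seconds
        self._store: dict = {}  # key -> (expiry_timestamp, value)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        expires_at, value = entry
        if time.monotonic() >= expires_at:  # expired: drop it, report a miss
            del self._store[key]
            return default
        return value

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)
```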

Architectural decisions

Requirements gave you constraints. Technical planning chooses an approach that satisfies those constraints. The choice often involves tradeoffs: performance versus complexity, flexibility versus simplicity, initial development time versus long-term maintenance cost.

Document the decision and the reasoning. Not in lengthy prose, but in enough detail that someone reading it six months from now understands what you chose and why. Architecture Decision Records work well for this: a short document per decision covering what you decided, what alternatives you considered, what tradeoffs you evaluated, and why you chose this option.

The format keeps you honest. If you can't articulate why you chose approach A over approach B, you don't understand the tradeoffs well enough. Writing it down forces clarity. It also gives future developers (or future AI sessions) the context they need to work with your decisions or change them intelligently when circumstances change.

Here's an example for a caching decision. Context: API responses are expensive to generate and mostly static. Decision: use Redis for caching with 5-minute TTL. Alternatives considered: in-memory caching (doesn't persist across deploys, doesn't share between instances), PostgreSQL with TTL column (adds load to the database we're trying to reduce), no caching (too slow). Tradeoffs: Redis adds operational complexity and another service to monitor, but provides the performance we need and works across multiple application instances. We chose Redis because we already run it for session storage, so operational complexity is minimal, and the performance gain is substantial.
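The same decision, written in the ADR format described above (the ADR number and headings are one common convention, not a requirement):

```markdown
# ADR 007: Cache API responses in Redis

## Context
API responses are expensive to generate and mostly static.

## Decision
Use Redis for caching, with a 5-minute TTL.

## Alternatives considered
- In-memory caching: doesn't persist across deploys, doesn't share
  between instances.
- PostgreSQL with a TTL column: adds load to the database we're trying
  to reduce.
- No caching: too slow.

## Tradeoffs
Redis adds operational complexity and another service to monitor, but
provides the performance we need and works across multiple application
instances. We already run Redis for session storage, so the added
operational complexity is minimal.
```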

That's 150 tokens. Writing it took five minutes. It explains the decision well enough that if someone later wants to replace Redis, they understand what problem it solves and can verify the replacement handles the same requirements.

Interface design in prose

Before you write code, describe the interfaces in prose. Not the full implementation, just the signatures, the contracts, and the error behavior. This sounds tedious, but it catches misalignments before they become code.

A user authentication module might expose three functions: authenticate takes an email and password, returns a token or error. validate_token takes a token, returns user claims or error. invalidate_token takes a token, returns success or error. Each function documents what inputs it accepts (types and constraints), what outputs it produces, what errors it can return, and what side effects it has.
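The prose description translates almost mechanically into signatures plus contracts. Here is one sketch of that auth interface; the type and exception names are assumptions, and the bodies are deliberately stubs, because at this stage only the contracts are decided.

```python
# Sketch: the three-function auth interface from the prose description,
# as signatures and contracts only. Type and exception names are
# assumptions; no implementation exists yet by design.
from dataclasses import dataclass

@dataclass
class Claims:
    user_id: str
    email: str

class AuthError(Exception):
    """Raised for invalid credentials or malformed/expired tokens."""

def authenticate(email: str, password: str) -> str:
    """Return a session token.

    Raises AuthError on bad credentials. Side effect: records the login.
    """
    raise NotImplementedError

def validate_token(token: str) -> Claims:
    """Return the user claims encoded in the token.

    Raises AuthError if the token is malformed or expired. No side effects.
    """
    raise NotImplementedError

def invalidate_token(token: str) -> None:
    """Revoke the token. Idempotent: revoking twice is not an error."""
    raise NotImplementedError
```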

The prose description forces you to think through the interface before you're committed to code. Does authenticate need to return just a token, or does it need to return user data too? Should validate_token throw an exception for invalid tokens, or return a result type that callers can handle? Does invalidate_token need to be idempotent? These questions are cheap to answer in prose. In code, they require refactoring after you've built things that depend on your initial choice.

AI helps here by generating interface proposals from requirements. Give it the requirements document and ask for interface signatures. It'll produce something reasonable. You critique it, identify what it missed or got wrong, refine the description. A few iterations get you clear interfaces that you can implement against.

Dependency mapping

Components depend on other components. Authentication depends on database access and password hashing. The API layer depends on authentication and business logic. Business logic depends on data access. Draw these dependencies, even if it's just text: "API → Auth → Database, API → Business Logic → Data Access."

The map shows you what needs to exist before you can build what. You can't implement the API layer until authentication works. You can't implement authentication until database access works. This gives you implementation order and tells you where to start.

It also exposes circular dependencies before they're in code. If authentication depends on user data and user data depends on authentication, you have a cycle that will cause problems. Spotting it in the dependency map means you can restructure before implementation. Spotting it in code means refactoring working code.
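The text map above is small enough to encode as data, which gives you the implementation order and the cycle check for free. A sketch using Python's standard-library `graphlib`:

```python
# Sketch: the text dependency map as data. A topological sort yields an
# implementation order (prerequisites first) and surfaces cycles before
# any code exists. Component names mirror the example above.
from graphlib import TopologicalSorter, CycleError

deps = {
    "API": {"Auth", "Business Logic"},
    "Auth": {"Database"},
    "Business Logic": {"Data Access"},
}

# static_order() emits prerequisites before the things that depend on
# them -- i.e. the order you should build in, ending with the API layer.
order = list(TopologicalSorter(deps).static_order())

# The auth <-> user-data cycle from the paragraph above, caught early:
cyclic = {"Auth": {"User Data"}, "User Data": {"Auth"}}
cycle_detected = False
try:
    list(TopologicalSorter(cyclic).static_order())
except CycleError:
    cycle_detected = True
```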

The mapping artifact is simple. Components as nodes, dependencies as arrows. You can draw it in any tool, or just write it as text. The value isn't the artifact itself but the thinking it forces: what depends on what, and does that dependency structure make sense?

Failure mode analysis

For each component, think through how it can fail and what should happen when it does. The authentication service can fail if the database is unreachable, if the password hashing library crashes, if rate limiting is triggered, if the token is malformed. Each failure mode needs a specified response.

Database unreachable: retry with exponential backoff, return 503 after timeout. Password hashing crash: log error, return 500. Rate limiting triggered: return 429 with retry-after header. Token malformed: return 401. Specifying these responses during technical planning means they're consistent across the system. Every component handles database failures the same way. Every component returns 401 for authentication failures.
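That failure table can be encoded once so every component maps the same failure to the same response. The exception names below are assumptions standing in for whatever your stack actually raises.

```python
# Sketch: the failure-mode decisions above encoded as one shared table,
# plus a retry helper with exponential backoff. Exception names are
# assumptions; the status codes come from the planning decisions.
import time

class DatabaseUnreachable(Exception): pass
class HashingCrash(Exception): pass
class RateLimited(Exception): pass
class MalformedToken(Exception): pass

RESPONSES = {
    DatabaseUnreachable: (503, {}),             # after retries exhausted
    HashingCrash: (500, {}),                    # log the error, then fail
    RateLimited: (429, {"Retry-After": "30"}),
    MalformedToken: (401, {}),
}

def with_retries(operation, attempts=3, base_delay=0.1):
    """Retry with exponential backoff, then let the failure surface."""
    for attempt in range(attempts):
        try:
            return operation()
        except DatabaseUnreachable:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)

def to_response(error: Exception) -> tuple[int, dict]:
    """Translate a failure into the (status, headers) decided in planning."""
    return RESPONSES.get(type(error), (500, {}))
```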

The alternative is letting each implementer decide. One returns 500 for database failures, another returns 503, a third retries indefinitely and hangs. Users get inconsistent error experiences. Debugging becomes harder because there's no systematic error handling.

Failure mode analysis takes maybe 20 minutes per major component. Walk through what can go wrong, decide how to handle it, document the decision. During implementation, these decisions are constraints that guide code. You don't have to think about error handling strategy while implementing, because the strategy is already decided.

When to stop planning

Technical planning is complete when you can hand the plan to an implementer and they can build the system without making architectural decisions. They'll still make implementation decisions (variable names, loop structure, specific algorithms), but the big choices are locked in: what libraries to use, how components connect, what interfaces look like, how errors propagate.

The test: read your technical plan and ask whether a competent developer could implement from it without coming back to ask "should I use approach A or B?" If they'd need to ask, you haven't finished planning.

Too much planning is also possible. If you've specified variable names and loop structures, you've crossed into implementation. The boundary is: planning specifies architecture and interfaces, implementation fills in the details. Keep planning at the right level of abstraction.

The artifacts from technical planning flow into design. You've chosen libraries, sketched component relationships, specified interfaces, and documented failure handling. Design turns those specifications into type signatures and stub code. The constraints keep accumulating, making each subsequent phase more mechanical.