As I run WindBorne, I often analogize how I and the company operate to how a computer processor works. The principles of computing are the most correct way to think about just about anything, after all.

An important concept from computing is branch prediction, and you can't run an efficient information machine without it.

Branch prediction is something that a CPU does when it encounters a branch in the code. A branch is a point where execution could go one of two ways, like an if statement. For example:

input = request.get_input()
input = input.trim()
input = input.lowercase()

if (input.is_empty()) {
    // rare edge case: almost all requests carry input
    return error("input required")
}

validated = schema.validate(input)
result = database.query(validated)
return format_response(result)

The if (input.is_empty()) check is a branch in the program. Execution may go one way or the other, depending on the outcome of the check.

The problem this poses for a processor is that execution is highly pipelined. Pipelining (another useful concept) is when a CPU overlaps multiple instructions at different stages of completion, like an assembly line. While one instruction is being executed, the next is being decoded, and the one after that is being fetched from memory. This keeps all parts of the processor busy and dramatically increases throughput. But it means the CPU needs to know what's coming next.
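To make the assembly line concrete, here is a toy cycle-by-cycle model of a three-stage pipeline (a deliberately simplified sketch: one cycle per stage, no stalls, not any real microarchitecture). It prints which instruction occupies each stage on each cycle, and compares the total cycle counts with and without overlap.

```python
# Toy pipeline model: three stages, one cycle per stage, no stalls.
STAGES = ["fetch", "decode", "execute"]
N = 5  # number of instructions

pipelined_cycles = N + len(STAGES) - 1  # pipe fills, then one instruction finishes per cycle
sequential_cycles = N * len(STAGES)     # each instruction runs start to finish alone

for cycle in range(pipelined_cycles):
    row = []
    for s, stage in enumerate(STAGES):
        i = cycle - s  # which instruction, if any, occupies this stage
        row.append(f"{stage} i{i}" if 0 <= i < N else f"{stage} --")
    print(f"cycle {cycle}: " + " | ".join(row))

print(f"pipelined: {pipelined_cycles} cycles vs sequential: {sequential_cycles}")
```

Five instructions take 7 cycles overlapped versus 15 run one at a time; the gap widens as the pipeline gets deeper and the instruction stream gets longer.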

So, in order to efficiently keep the pipeline moving, even though there is a branch up ahead, we need to predict which branch we are most likely to go down and keep pipelining that so we don't slow execution. Often in computer programming, one branch is far less common than the other because it's just catching some edge case, as in the example above. If the branch prediction is wrong, the cost is low: the CPU just flushes the pipeline (discards the speculatively executed instructions) and starts over down the correct path. This is called a branch misprediction, and modern CPUs track misprediction rates and dynamically adjust their prediction strategies to minimize them.
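One classic dynamic-prediction scheme is the two-bit saturating counter, sketched below (an illustration of the idea, not how any particular CPU implements it). It takes two consecutive mispredictions to flip the prediction, so a single rare edge case, like the empty-input branch above, doesn't derail a stable pattern.

```python
class TwoBitPredictor:
    """Two-bit saturating counter: states 0-1 predict 'not taken',
    states 2-3 predict 'taken'. Correct outcomes push the counter
    toward saturation; mispredictions pull it back one step."""

    def __init__(self):
        self.state = 2  # start weakly predicting 'taken'

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        if taken:
            self.state = min(3, self.state + 1)
        else:
            self.state = max(0, self.state - 1)

# A branch that is almost always taken, with one rare edge case:
predictor = TwoBitPredictor()
outcomes = [True] * 9 + [False]

mispredictions = 0
for taken in outcomes:
    if predictor.predict() != taken:
        mispredictions += 1
    predictor.update(taken)

print(mispredictions)
```

Out of ten branch outcomes, only the rare edge case mispredicts, and the predictor stays in a 'taken' state afterward, so the common path keeps flowing through the pipeline at full speed.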

In an organization, this sort of thing happens all the time, especially when it comes to one person approving another person's actions. At a sub-100-person company, I think the CEO should approve every new position that opens up. But people can branch predict: if they think I'm going to approve it, they can start the job spec and even post it before I say yes. And if we mispredict, it's not the end of the world; we just take the post down.

Spend approval is another form of branch prediction, but one built on trust. Rather than requiring employees to get approval before every purchase, you let people spend up to a card limit and have accounting review after the fact. You're predicting that most spending decisions will be reasonable, and the cost of a misprediction (an occasional bad purchase that gets flagged) is far lower than the latency cost of requiring pre-approval for everything. This only works if you hire people with good judgment whom you trust, but then, you will never build an efficient org without trust.

The lens of latency minimization is useful for everything. Scheduling a meeting to get a final decision on something is high latency, while a quick direct message for a vibes-based gut check can be very fast. So you can branch predict off a quick message and confirm later. Branch prediction mildly increases the total amount of work, because mispredictions have to be unwound, but it greatly increases execution speed and reduces latency. Always branch predict when you can.