The Template Collision

Two perfectly good templates meet on the same deal. Neither is wrong. But someone redlines anyway. That's Category 3... and it's where most negotiation drag actually lives.

Field Notes from the Negotiation Lab - 4.4

March 26, 2026

Last week we laid out the four categories of redlines.

Must-haves. Judgment calls. Template collisions. Cosmetic noise.

If you saved that post, you probably already recognized the pattern in your own work. A lot of you told me as much.

But here's the thing about frameworks: they're easy to nod along with and hard to actually apply.

So this week, I want to focus on the category that does the most damage in practice.

Not Category 1. Those are rare and obvious.

Not Category 4. Annoying, sure. But marginal. Just friggin’ behave yourself.

Let’s dive into Category 3.

The template collision.

This is where most negotiation drag actually lives.

Twenty Ways to Say the Same Thing

Here's something nobody in legal wants to admit out loud.

I can write the same clause twenty different ways.

Different structure. Different phrasing. Different length. All of them substantively identical.

You could put all twenty in front of a judge and get the same result.

But they don't look the same. And in contract negotiation, appearance often drives behavior.

When your version of a clause meets my version, something happens almost reflexively. The receiving lawyer sees unfamiliar language and flags it. Not because the substance is wrong. Because the words are different.

And once it's flagged, it gets redlined.

Another round. Another review cycle. Another week.

Nobody's risk position moved. But the deal just got slower.

This is Category 3 in its purest form. Two templates colliding. Two sets of perfectly reasonable language fighting for the same paragraph. Neither wrong. Both generating friction.

A Flashback Worth Revisiting

Way back in Entry 1.1 of the Redliner’s Diaries, I shared an email from a procurement professional that perfectly captured how far we've drifted from common sense. It's worth revisiting through the lens of what we now know about Category 3.

The key line: "My review process involves a thorough, section-by-section and word-by-word analysis, during which I will integrate our own form terms."

Read that again.

"Integrate our own form terms."

Section by section. Word by word. Fourteen to thirty days to process. Go back and look at the email in 1.1… I can’t make that shit up.

That's not risk analysis. That's not a judgment call about where the incoming contract falls short.

That's a template replacement exercise described, in writing, as a professional methodology.

Every clause in the incoming draft gets measured against their form. Not against what the clause actually does. Not against the risk it creates or fails to address. Against their preferred language.

Where the language matches, it survives. Where it doesn't, it gets overwritten. Regardless of whether the substance changes.

That is Category 3 operating at industrial scale.

And the most telling part? The email closed with a suggestion that it might be faster to just use their form agreement as the base document instead.

Not because our terms were deficient. Because their process requires their language. Full stop.

Fourteen to thirty days to steer us toward "no." Not because anyone was trying to kill the deal. Because the process had become the point.

When I first shared that email, I framed it as an example of redline theater. Now, many entries later, I think it illustrates something more specific: what happens when template application replaces legal judgment. When "review" means "replace with ours" and nobody stops to ask whether the original version already accomplished the same thing.

The Lazy Move vs. the Disciplined Move

When a lawyer encounters a clause that's written differently from their template, there are two possible responses.

The lazy move: swap in your version.

It's faster. It's easier. It doesn't require you to actually read and analyze what the other side wrote. You already have your clause library open. It’s been “approved.” You copy, you paste, you move on.

And you just manufactured a redline that didn't need to exist.

The disciplined move is harder. You read their version. You compare it to your position... not your language, your position. You ask: does this clause accomplish the same thing mine does? Is the delta between their version and mine substantive or preferential?

If it's preferential, you leave it alone. You move on. You save everyone a round.

That second move requires three things most people either don't have or don't exercise.

First, enough experience to distinguish preference from substance. You have to actually understand what the clause does... not just what it looks like.

Second, enough confidence to leave someone else's language in the document. This is harder than it sounds. Lawyers are trained to improve. Leaving a clause untouched when you could "make it better" feels like you're not doing your job.

Third, enough discipline to resist the instinct to perform. This ties directly back to what we talked about in the early entries... the ego problem. The compulsion to mark something up just to prove you were here.

When all three are present, Category 3 edits get caught before they become redlines.

When any one of them is missing, the template collision happens. And it happens over and over and over.

Who's Holding the Pen?

Here's where it gets structural.

The people most likely to make the lazy move are the ones least equipped to make the judgment call.

A junior lawyer sees unfamiliar language and defaults to what they know: the template. They don't yet have the experience to read a clause and recognize it as functionally equivalent to their own. So they swap. Every time.

A procurement team with a mandate to apply their standard form does exactly that. Clause by clause. Regardless of whether the incoming language already achieves the same outcome. We saw that email. That's the process working as designed.

And then there's AI.

Let's be fair. The current generation of contract review tools isn't just doing blind template matching. Many of them have real analytical capability. They can parse a clause, assess risk levels, flag genuine gaps. That's progress, and it's worth acknowledging.

But here's what they still don’t do well: context.

They don't know this is a $15,000 deal that needs to close by month-end. They don't know the customer is a strategic account you've been cultivating for two years. They don't know that your CEO promised the board six new logos this quarter and this is number five.

Without that context, the tool does what it's built to do. It finds differences. It flags them. It suggests changes. And because it can do this at scale, effortlessly, the volume of suggested edits goes up. Way up.

In my experience, we're seeing more redlines on more minor issues than ever before. Not because the tools are wrong, exactly. Because generating edits is now so easy. The cost of auto-generating a markup has dropped significantly. But the cost of responding to one hasn't changed to match it.

Every AI-suggested edit still requires the other side to review it, evaluate it, and decide whether to accept, reject, or counter. That process takes time. And effort. And money.

The tools will get better. They're getting better. But until they can weigh a preferential variation against deal economics, urgency, and relationship dynamics... they're accelerating the wrong part of the process.

Why the System Rewards the Wrong Behavior

You might read all of this and think: the fix is better training. Teach lawyers to recognize "close enough." Teach them when to leave a clause alone.

And sure, that helps. In the same way that telling people to eat better helps with public health. Technically true. Practically limited.

Because the system as it's built rewards the wrong behavior.

Law firms bill hours. More redlines means more hours. The incentive doesn't point toward restraint.

In-house teams measure thoroughness. A markup that comes back untouched raises eyebrows. "Did you actually review this?"

AI tools are built to find differences. They're getting better at assessing whether differences matter. But the default output is still a markup, and markups create rounds.

Template libraries exist to standardize language. So when language doesn't match, the library becomes the weapon.

Every piece of the infrastructure pushes toward more edits. The discipline to not edit... that's purely personal. And personal discipline doesn't scale.

The Cost Nobody's Tracking

We've talked about velocity. Rounds. Time. Money.

But here's what Category 3 actually costs that doesn't show up on any ledger.

The relationship.

Every unnecessary template swap sends a signal the sender doesn't intend. When you overwrite a clause that was functionally fine, the other side doesn't hear "preferential improvement." They hear: "Your language isn't good enough." Or worse: "I don't trust your drafting."

One swap, maybe that's just thoroughness. Five swaps across ten clauses and the tone shifts. The other side starts reading defensively. Shoulders tighten. Responses get sharper. What should have been a collaborative process starts feeling adversarial.

And it compounds.

By the time you've exchanged three rounds of Category 3 edits, both sides have spent weeks signaling distrust over provisions that say the same thing. The relationship hasn't even started yet and it's already strained.

I've seen deals close after this kind of process. They close tired. The business teams are exhausted. The goodwill that should have been the foundation of a working partnership got ground down in the markup. And nine months later, when something goes sideways and the parties need to work together to solve it, that residual friction shows up. People don't forget how the negotiation felt.

Category 3 isn't just drag. It's corrosive.

And here's the thing about AI in this context that I think gets missed. The tools can get smarter. They can get better at distinguishing preference from substance. They can learn to weigh deal economics and flag only what genuinely matters.

But they'll still be operating inside the same paradigm.

Closed hand of poker. Your draft versus mine. Redlines exchanged across the table like opposing briefs.

No matter how intelligent the tool becomes, if the structure of the interaction is adversarial, the relationship cost remains. You're still firing markups at each other. You're still signaling positions through edits instead of conversations. You're still playing the same game... the cards just get shuffled and dealt faster.

What changes the relationship isn't a smarter tool inside the old process. It's a different process entirely. One where both sides can look at the same structure together. Where the opening move isn't "here's my redline" but "let's look at this together and decide where we actually disagree."

That's a fundamentally different signal. And it's one that no amount of AI sophistication inside the traditional redline exchange can replicate.

What Actually Needs to Change

The instinct to standardize is right. Having a playbook, having positions, knowing what your company cares about and where it flexes... that's essential.

But standardization focused on language will always collide with someone else's equally valid standardization of language.

Twenty versions of the same clause. All correct. All different. All generating friction when they meet.

The real leverage comes from standardizing on positions, not prose.

If you know where you stand on an issue... what you require, what you prefer, and where you're flexible... the specific words matter a lot less.

Because now you're reading the other side's clause and asking the right question.

Not: "Does this match my template?"

But: "Does this achieve my position?"

That's the shift. And it's the difference between a negotiation that generates twelve rounds and one that generates two.

Wrapping It Up

Category 3 is where deals quietly bleed time and trust.

Not through bad faith. Not through incompetence. Through the accumulated friction of two sets of reasonable language colliding on every clause, in every deal, generating redlines that move no one's risk position but erode everyone's goodwill.

The fix isn't better templates. It's better judgment about when to leave the other side's template alone. And beyond that, it's a structure that doesn't force both parties into an adversarial exchange in the first place.

That judgment is rare. It's hard to teach. And the current system actively discourages it.

Which means for most deals, the template collision will keep happening. Same fight. Different logos.

Until someone changes the game.

Join the discussion on LinkedIn.
