Operating Cadence at Scale: What Actually Drives Alignment Across Architects, Programs, and Executive Leadership
A practical framework for engineering leaders on building cadence as an information system across architects, program managers, and executives.

Brandon Wilburn
April 27, 2026

Most leadership teams already have a cadence. What they often lack is a cadence that does anything.
If you graphed the average tech organization's calendar, you would find a tidy lattice of recurring meetings: weekly one-on-ones, biweekly portfolio reviews, monthly architecture forums, quarterly business reviews. The structure is impeccable. The output is anecdotal. Decisions still slip past steering committees. Architects still surface critical risks weeks after the program team needed them. Executives still discover problems by reading status emails on Sunday night.
The bug is not the meeting count. McKinsey's research found that 61 percent of executives say at least half the time they spend making decisions is ineffective, and that 80 percent were already changing meeting structure and cadence after the pandemic. Bain's analysis of 17 large corporations found one company spending roughly 300,000 person-hours a year supporting a single weekly executive committee meeting, the equivalent of about 150 full-time employees. Adding meetings to a system that already wastes most of the meetings you have is not a fix.
The fix is treating cadence as an information system, not a calendar.
This article walks through what that means at three altitudes: the architect, the program manager, and the senior executive. It includes the artifact I would actually hand a new leader on day one, with reasons for each section.
The principle: cadence is information flow, not events
Andy Grove, in High Output Management, drew a line between two kinds of meetings. Process-oriented meetings run on a regular rhythm and exist for structured information exchange. Mission-oriented meetings are called when a specific decision must be made. Grove argued that a manager's leverage came from the first kind, and that the second should be rare. Not because decisions aren't important, but because if the first kind is built well, most decisions get made inside the rhythm rather than at emergency stops.
The framing still holds because the question hasn't changed. A working cadence answers six things, repeatedly, at every layer of the organization (High Output Management, Vintage Books edition; the full text is also scanned at the Internet Archive):
- What is happening?
- Why does it matter?
- What's at risk?
- What decisions are needed?
- Who owns each one?
- When does each thing land?
If your weekly review doesn't produce clean answers to those six questions, you don't have a cadence problem. You have a writing problem, an ownership problem, or both.
The companies that have made cadence into a competitive advantage have all converged on a similar move. They shifted the load from speaking to writing. Bezos famously banned PowerPoint in Amazon S-team meetings in 2004 in favor of six-page narrative memos read in silence at the start of each meeting. His argument was specific: a good narrative memo forces logical connections that bullet points let you skip. Slides give the presenter permission to gloss; a narrative does not. By the time the room finishes reading, everyone has the same context, and the discussion can start where it should start, which is at the points of disagreement.
Stripe ended up in the same place from a different starting point. Patrick Collison's internal emails were known for footnotes and citations, and the company's default for project kickoffs and decisions became a written narrative rather than a deck. Pre-meeting memos were mandatory; retrospectives were written; teams maintained "living documents" rather than scattered Slack threads (more on the Stripe writing playbook). Brie Wolfson, a former Stripe employee who later wrote the canonical account, makes the point that the writing wasn't a stylistic preference. It was the operating system.
The lesson isn't "ban PowerPoint." It's that the meeting is the wrong unit of work. The artifact is the unit of work, and the meeting is the synchronization event. Most cadences fail because nobody has thought carefully about the artifact the meeting is supposed to be reading.
Layer 1: Architects, define the future and surface risk early
Architects sit at the intersection of long horizons and tight deadlines. They are accountable for systems that have to work in three years and for decisions that have to be made on Friday. When their cadence fails, the failure mode is almost always the same: a critical change gets made quietly, and by the time it reaches the people who needed to know, it is already in production.
The 737 MAX is the most expensive worked example available.
Boeing engineers originally designed MCAS, the flight control system later implicated in the Lion Air and Ethiopian Airlines crashes, with two activation triggers and a relatively narrow scope. About a third of the way through flight testing in 2016, the system was substantially redesigned. A single angle-of-attack sensor became the trigger. The system's authority expanded into low-speed flight regimes the original design had not contemplated. Boeing did not submit a revised system safety assessment to the FAA at the time of the change (Seattle Times investigation, via AFA-CWA; DOT Office of Inspector General report).
The OIG's later finding is the line that should be printed and pinned over every architecture review: "Communication gaps further hindered the effectiveness of the certification process." An international expert review (the Joint Authorities Technical Review) reached a similar conclusion, observing that information about MCAS had been delivered to disconnected groups in fragments, and that the FAA's ability to recognize the system's full implications was undermined as a result (Senator Duckworth's letter, citing the JATR review; Wikipedia summary of the groundings).
Two crashes, 346 deaths, and roughly $20 billion in losses later, the engineering critique converges on a single observation: the architectural change was real, but the cadence that should have surfaced and propagated it was not there (Henrico Dolfing's case study; Harvard Law School Forum on Corporate Governance). The technical failure is downstream of the communication failure.
This is the high-stakes version of a problem that exists in every engineering organization. Architects often work in ambiguity, and ambiguity is opaque to everyone outside it. If you cannot summarize what your architects are doing in two minutes, you have a visibility problem, and visibility problems become decision problems become safety problems.
What I want from a weekly architect artifact:
Active workstreams. Not a project list. A short paragraph per initiative that names the design phase (concept, validation, implementation), the systems it touches, and the teams it depends on. If a workstream has been "in validation" for six weeks, that is a question, not a status.
Scope and impact. Which systems are affected and what business outcome the work is meant to produce. Architects who cannot connect their work to revenue, scalability, or compliance tend to over-optimize for what they personally find interesting. This is human and normal. The artifact corrects it.
Risks, with severity and a mitigation plan. Including the unknowns that have not been validated yet. The MCAS lesson is not "Boeing engineers were reckless." It is that risks compound silently when the reporting line treats them as fragments.
Milestones with target dates. Architectural work is open-ended by default, and open-ended work always slips. Decision milestones, prototype dates, and integration readiness dates are not a cage. They are the rhythm that lets execution teams plan around you.
Decisions needed and actions required. Architects who diagnose without prescribing leave their leaders to invent the prescription, badly. The cadence should make it cheap to ask for a decision and expensive to leave one implicit.
A working template is at the end of this piece.
Layer 2: Program management, the connective tissue
Programs are where strategy meets the calendar. A program manager's job, at its core, is to make sure that what the architects designed and what the executives committed to actually ships on a date the business can plan around. When program cadence fails, the failure is also legible: status reports become green-shaded fiction, dependencies get discovered at integration time, and the org learns about a slip from a customer.
The most useful frame I have found for program cadence is the McKinsey distinction between three kinds of decisions inside an organization: big-bet decisions (rare and high-stakes), cross-functional decisions (frequent and coordinated), and delegated decisions (one person should make them, and if a meeting is needed at all, the meeting itself is the bug) (McKinsey, "To unlock better decision making, plan better meetings"). Programs run almost entirely on cross-functional decisions. The cadence has to make those decisions cheap.
Concretely, a program update should answer:
What is the current ship plan, and what changed since last week? Not a Gantt chart. A short narrative that names the milestones moving forward, the milestones at risk, and what would have to be true for the at-risk ones to recover. A program update where nothing is at risk is almost always one that has not done the work.
Where are dependencies blocked? With names. "Waiting on platform" is not a dependency. "Waiting on the auth team's federated identity API, which is gated on a security review scheduled for next Thursday" is a dependency. The first version produces follow-up meetings. The second produces a ping in a Slack thread.
What decisions does this team need from outside the team? Listed with the decision, the decider, and the date by which the decision blocks ship. This single change is the highest-leverage move I have seen in program cadence. It converts steering committees from status theater into decision factories.
What just shipped, and what did we learn? Programs that do not write down what they learned forget. Forgetting compounds. Two lines per shipped milestone, kept in a single living document, is worth roughly one consulting engagement per year.
The Bain figure is worth keeping in mind here. The 300,000 hours that a single weekly executive committee meeting consumed at one large company across preparation, ancillary meetings, and cascade was later reframed by Bain itself as the equivalent of about 150 full-time employees, or, in their later headline, 34 years of work for one weekly meeting. Programs sit immediately downstream of those meetings. They are where the cost is paid. A program cadence that produces decisions earns the cost. One that produces reassurance does not.
A short artifact for the program manager:
# Program Update — [Program], week of [Date]
## Ship plan
- Next milestone and target date
- What is at risk, and what would have to be true to recover
- What changed since last week
## Dependencies blocked
- [Team or system], [why], [unblock by date]
## Decisions needed from outside this team
- [Decision], [decider], [date by which it blocks ship]
## What shipped, what we learned
- [Milestone], one-line takeaway
The program update is shorter than the architect update by design. Most of the program manager's leverage is in routing other people's artifacts, not producing their own. If your program update runs longer than this, the program manager is doing work that should sit at the team level.
Layer 3: Senior leadership, strategy at a useful resolution
The trap at the executive layer is that everything looks important and almost nothing is concrete. Reports flatten as they travel. Dashboards roll up. The longer the reporting chain, the more sandpaper has been applied to the actual edges. By the time information reaches the VP, the surprises have been polished out.
Four patterns are worth borrowing.
The first is Amazon's six-pager. The structural insight is not that six pages is the right length. It is that the document forces logical connections that bullets allow you to skip. Bezos has described his ideal meeting as one that begins with thirty minutes of silent reading, on the grounds that the document is the meeting until the document has been read. Memos are written, rewritten, circulated for review, set aside, and edited again. Authors' names are not attached, because the memo is from the team. The discipline is the point.
The second is Grove's operations review. Grove distinguished between staff meetings (a manager and direct reports, working through their own issues) and operations reviews (employees several levels apart presenting and being questioned by senior leaders who have not seen the material in advance). Operations reviews are how a senior leader stays in touch with what is happening four levels down without falling into ad hoc fire drills. They are also expensive, and Grove was emphatic about how to run them: presenter prep time matters, the reviewing manager should ask questions rather than make speeches, and the meeting should produce decisions rather than reassurance (chapters 4 and 5 of High Output Management).
The third is the operating rhythm Brian Chesky has described in interviews, which is more controversial but worth taking seriously. Chesky runs reviews on tiered cadences: weekly, biweekly, monthly, and quarterly. Each area of the business is assigned to whichever rhythm fits how fast it actually changes. Customer service, which is relatively stable, lands on the quarterly cadence; product and marketing land higher up (transcribed in this Y Combinator-era talk summary; Fortune profile). The point is not that every leader should personally review accounting weekly. It is that the cadence should match the rate of change of what is being reviewed, and that one cadence for everything is roughly as useful as one shoe size for everyone. There is reasonable debate about whether Chesky's hands-on style scales beyond him. The underlying mechanic, matching review cadence to system volatility, is durable.
The fourth, and the one closest to the audience of this article, comes from Will Larson, who served as CTO at Carta and Calm and as an engineering leader at Stripe and Uber. In The Engineering Executive's Primer, Larson treats running meetings as a first-class executive skill rather than scaffolding around the real work. His framing is direct: good meetings are the heartbeat of a well-run engineering organization, and well-run organizations anchor around a small number of high-quality recurring meetings rather than an ever-expanding portfolio of them (Larson, "Meetings for an effective eng organization"). His specific recommendation for the executive's weekly or monthly execution review is worth pulling out: it should be cross-functional, not engineering-specific. Larson's reasoning is mechanical. Any single-function execution review tends to assign responsibility to functions that are not in the room, which produces narratives rather than decisions. The cross-functional version is the one where the people who can actually resolve the issue are present when the issue is raised. This is the same point Grove was making about operations reviews, restated in the language an engineering executive will recognize.
What I want from an executive review artifact:
The narrative. A two- to three-page memo, written by the team being reviewed, that lays out the state of the world, the bets being placed, the risks accumulating, and the decisions being requested. This is not a status update. Status updates are for the program layer. The narrative is where the team writes down what they think is true, and why.
The numbers. Specifically the numbers the leader could not infer from the narrative alone. Headline metric, leading indicators, and one trailing indicator that disconfirms the leading ones if the leading ones are lying. Pair effect with counter-effect, in Grove's framing.
The asks. Three at most. If a team has more than three asks for the executive layer, the team is delegating decisions upward that should be made closer to the work.
The risks worth waking up for. The point of an executive review is for the leader to know what could blow up before it blows up. The cost of a missed risk is asymmetric. Better to surface the unlikely bad outcome and have it not happen than the reverse.
A skeleton for the executive memo, kept deliberately short because the value is in the prose between the headers:
# [Area] Review — [Date]
## Situation
One paragraph. The state of the world in plain language, as the team sees it.
## Bets
What you are investing in and why. Three to five bullets at most.
## Risks worth waking up for
Three at most. Each with severity and what you are doing about it.
## Numbers
Headline metric, leading indicators, and one trailing indicator that
disconfirms the leading ones if they are lying.
## Asks
Three at most. Specific decisions you need from this review.
The skeleton is not the artifact. The prose between the headers is the artifact. A team that fills in the skeleton without writing the paragraphs is producing a status report, not a memo.
The cadence lattice
Each layer's artifact is useful on its own. The leverage compounds when they sequence correctly. The architect update has to land before the program review so program managers can read it before they write theirs. The program update has to land before the executive review so executives have both inputs in front of them. Decisions made at the executive review have to cascade back down through the same channels in reverse, so that the next round of architect and program updates can incorporate them.
The cycle length is a separate question. Some organizations run all three layers weekly. Others run architects weekly, programs biweekly, and executives monthly. A few run architect updates on the cadence of the work itself, which might be every two weeks for one initiative and every six for another. The lattice describes the order, not the period. What matters is that within whatever cycle each layer runs, the sequencing holds.
Three rules govern it.
Artifacts precede meetings. Every review has a written input that lands at least 24 hours before the meeting. If the artifact is late, the meeting is canceled or downgraded to a working session. The meeting is the synchronization event, not the production event.
Up the stack, then back down. Information flows from architect to program to executive over the first half of each cycle. Decisions flow back from executive to program to architect over the second half. A team waiting on a decision past the cascade window should escalate, not absorb the wait into the next cycle.
Cadence matches volatility. A weekly rhythm works at the architect and program layers when the work changes weekly. Executive reviews of stable areas can run biweekly or monthly. The mistake is forcing every layer to the same rhythm.
The flow, regardless of cycle length: architect updates feed the program reviews, program updates feed the executive review, and decisions cascade back down the same path within the cycle.
For organizations running all three layers on a weekly rhythm, this is one way the days might fall:
| Day | Architect | Programs | Executives |
|---|---|---|---|
| Monday | Publish weekly update | Read architect updates | -- |
| Tuesday | -- | Publish program update | -- |
| Wednesday | -- | -- | Read both pre-reads |
| Thursday | Attend if invited | Attend | Operations review |
| Friday | Receive decisions | Receive decisions | Publish decisions |
The specific days are negotiable. So is the cycle length. The order is not. Architect input precedes program synthesis, program synthesis precedes executive review, and decisions cascade back within the cycle so the next round of artifacts can absorb them.
The most common failure of the lattice is the executive review where pre-reads have not been read. The artifacts arrive, the meeting starts, no one has done the reading, and the meeting collapses into a verbal status update. This is the failure Bezos was solving with silent reading at the start of every meeting, and it is worth being honest about what made his solution work. Bezos could mandate silent reading because he ran the meeting and the company. The same move from someone three layers down does not have the same effect.
If you run the review, you can adopt the silent-reading opening directly. If you do not, the realistic options are smaller and slower. You can write artifacts that are short enough that an unread pre-read still gets skimmed in the first two minutes of the meeting. You can lead with the asks section so the decisions surface even when the context did not. You can name, in the room, that a decision is being requested and that the pre-read covers the reasoning, which is a polite way of asking whether anyone read it. Over time, the pattern of decisions getting deferred because pre-reads went unread becomes its own evidence, and the person who runs the meeting either changes the practice or accepts the cost. Neither is your decision to make from below, but both become harder to ignore once the pattern is visible.
Scaling the lattice across a portfolio
The lattice as described handles one program well. A portfolio of twelve does not work the same way, for three reasons that compound.
Fan-in. A VP of engineering running twelve programs cannot consume twelve program updates and eight architect updates every cycle. The artifacts are individually well-written and collectively impossible to read. Without a synthesis layer between the per-program artifacts and the executive review, the executive either skims everything badly or asks for verbal summaries in the meeting, which is the failure the lattice was designed to prevent.
Fan-out. A decision made at the executive review for one program often affects three. "We are deferring the migration to Q3" is a single decision that has to land in every team that was depending on the migration. Without an explicit cascade mechanism, half the affected teams find out from rumor and the other half find out from a customer.
Heterogeneous cycles. A portfolio of twelve programs always has some moving fast, some moving slow, and some in maintenance. Forcing them all to the same cadence either drowns the executive in noise from the slow programs or starves them of signal from the fast ones.
The lattice generalizes by adding a synthesis layer and an explicit decision cascade.
The synthesis layer
The synthesis layer is the part most organizations get wrong. Someone has to read the architect updates and produce one architecture summary. Someone has to read the program updates and produce one portfolio update. The synthesizer is usually a chief architect, a head of program management, or a chief of staff, depending on how the org is structured.
Two failure modes are worth naming. The first is synthesis by committee, where the portfolio update is produced by a working group that smooths every claim into something nobody disagrees with. The result is a document with no edges. The second is synthesis as filtering, where the synthesizer decides what the executive is allowed to see, and the executive reads a sanitized version of the portfolio that no longer tells them what is actually at risk.
The fix is to write the synthesis as opinionated. The synthesizer is not a neutral aggregator. They are a reader. Their job is to read every input, form a view, and write that view down with their name on it. The portfolio update should say "the platform team's migration is at risk and I think the date will slip by four weeks" rather than "the platform team reports their migration is on track per the artifact." The first sentence is useful. The second is laundering.
This means the synthesizer needs to be senior enough to disagree with the artifacts they are reading. A chief of staff who cannot push back on a senior architect's update produces low-quality synthesis. The role is real management leverage, not administrative work.
Heterogeneous cycles
Programs do not move at uniform speed. A new product line in early development might warrant a weekly review. A mature product in maintenance mode might warrant a quarterly one. A migration program ramping toward cutover might shift from biweekly to weekly in its final stretch and then back to monthly afterward.
The lattice accommodates this by treating cadence as a property of the program, not the portfolio. The synthesis layer pulls from each program at whatever rhythm that program runs, and the portfolio review aggregates the most recent update from each. A program on a monthly cadence appears in the portfolio review with the same artifact for four cycles in a row, which is not a failure of the system. It is a feature. If the artifact has not changed in a month, the program has not changed in a month, and the executive does not need to spend time on it.
The trap is letting cadence drift downward without a forcing function. A program that quietly slips from weekly to biweekly to monthly to silent is the failure the synthesis layer is supposed to catch. The synthesizer should flag any program whose cadence has shifted and ask whether the shift reflects reality or neglect.
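One lightweight way to make that drift visible is a cadence register the synthesizer scans each cycle. The shape below is an illustration, not a prescription; the point is that the declared cadence and the actual publication dates sit next to each other, where the gap is obvious.
# Cadence Register
For each program:
- Program and owner
- Declared cadence: weekly | biweekly | monthly | quarterly
- Last artifact published: [Date]
- Drift: yes | no, with a one-line explanation if yes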
The portfolio review artifact
A skeleton:
# Portfolio Review — [Date]
## State of the portfolio
One paragraph from the synthesizer. What is working, what is at risk,
where attention is needed. Written with conviction, not consensus.
## Per-program summary
One row per program:
- Program, owner, current cycle, status (green/yellow/red)
- Most material change since last portfolio review
- Decisions pending from this review
## Cross-program risks
Risks that span more than one program: resource conflicts, shared
dependencies, sequencing issues.
## Cross-program decisions needed
Decisions that cannot be made within a single program. The
highest-leverage section. If empty, the synthesizer is not doing the job.
## Decisions cascading from the prior review
What was decided last cycle, where it has and has not landed.
The cross-program decisions section is the part most portfolios skip and the part that most justifies the executive's time. A program manager can resolve issues inside their program. The portfolio review exists to resolve issues across them.
Decision cascade at scale
The most underdeveloped piece of portfolio operating cadence is the decision cascade. The portfolio review produces decisions. Those decisions affect multiple programs. Without an explicit mechanism for landing them, decisions get lost in the gap between "the meeting is over" and "the affected teams know."
A working pattern: a portfolio decision log, maintained as a single living document, that every program subscribes to. Each entry has a date, a decision, the rationale, the affected programs, and the date by which each program confirms they have integrated the decision. Confirmation is mandatory. A program that has not confirmed is treated as a program that has not heard, and the synthesizer follows up.
This is administrative work. It is also where portfolios live or die. The cost of a decision made at the executive level but never landed in the affected programs is the cost of the decision plus the cost of the meeting that produced it plus the cost of the second meeting six weeks later, when someone discovers the decision was never integrated. A decision log closes the gap between the first cost and the second.
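The log itself can be as plain as the other artifacts in this piece. One minimal entry shape, with field names that are illustrative rather than prescriptive:
# Portfolio Decision Log
## [Date]: [Decision, in one line]
- Rationale: one or two sentences, written at the time of the decision
- Affected programs: [Program A], [Program B]
- Confirm by: [Date]
- Confirmed: [Program A: date], [Program B: pending]
An entry with a confirmation still pending past its confirm-by date is the synthesizer's follow-up list for the week.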
The flow at portfolio scale: per-program artifacts feed the synthesis layer, the synthesis summaries feed the portfolio review, and decisions land in the decision log and cascade back to each affected program.
The synthesis layer is the new component. The decision log is the new artifact. The single-program lattice still operates within each program. The portfolio version sits on top.
A worked example
Consider a product organization with three product lines, each with two to four active programs, and a horizontal architecture function with five architects. Roughly twenty artifacts produced per cycle, none of which a VP can read in full.
The architecture function publishes its five updates on whatever rhythm fits each architect's work. The chief architect reads all five and writes one architecture summary, due before the portfolio review. The summary names the two or three risks worth the executive's attention and explicitly omits the rest.
Each product line runs its own program review at whatever rhythm fits. The head of each product line writes one product-line summary, also due before the portfolio review. The portfolio review reads four artifacts (three product-line summaries plus the architecture summary), not twenty.
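The architecture summary and the product-line summaries can share a shape. A sketch in the same register as the other skeletons here, with section names that are illustrative:
# [Function or Product Line] Summary for the [Date] Portfolio Review
## My read (one paragraph)
What the synthesizer thinks is true across the inputs, written with their name on it.
## Risks worth the executive's attention
Two or three at most, each with severity and the synthesizer's view of the likely outcome.
## Deliberately omitted
One line naming the inputs that are healthy and need no attention this cycle.
## Decisions forwarded upward
Only the ones that cannot be resolved at this layer.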
Decisions made at the portfolio review land in the decision log. Each affected program confirms within a defined window. Programs that have not confirmed by the deadline get a personal follow-up from the synthesizer.
The trap in this structure is treating the synthesis layer as a clerk. The chief architect and the heads of product lines are not aggregators. They are readers with judgment. Their summaries are opinionated, named, and signed. If the role is held by someone without the authority or skill to disagree with the artifacts they are reading, the synthesis is empty, and the lattice fails at scale exactly where it succeeded at single-program scale.
What goes wrong, and how to fix it
A few patterns I have watched fail repeatedly.
Cadence as theater. The meeting happens, the deck is presented, no decisions are made, and everyone goes back to their desks. The McKinsey number, that 61 percent of executives say at least half their decision-making time is ineffective, is a fingerprint of this pattern. The fix is to require an explicit decisions section in every artifact, and to treat its absence as a defect.
Status reports that do not say anything. "On track" is not information. It is a credential. A useful status report names what is at risk, not what is fine. If your reports are mostly green, you are either on the easiest project of the decade or you are managing perception.
Architects without dates. Architectural work without milestones drifts to the bottom of every priority list and then surprises everyone when the integration date arrives. Open-ended is fine for research. It is not fine for anything that other teams are depending on. Architects should be expected to commit to decision dates, even if the decision is "we need another two weeks of validation."
One cadence for everything. Weekly reviews of yearly trends are noise. Quarterly reviews of weekly-changing systems are negligence. Match cadence to volatility.
Calendar-first, artifact-never. If the meeting exists but the document does not, the meeting is the document, and the document is bad. The single highest-leverage change a leader can make is to refuse to hold a recurring review without a written artifact.
Synthesis as laundering. At portfolio scale, the synthesis layer either does the work or hides it. A portfolio update that says "all programs reporting on track" is doing neither reading nor writing. The fix is to require the synthesizer to put their name on the document and to write at least one paragraph that disagrees with one of the input artifacts. If the synthesizer cannot find anything to disagree with across twelve programs in a given cycle, they have not read carefully.
The architect cadence template, in full
The architect template carries more weight than the program update or the executive memo because architects do more of the original thinking that the other two layers route or synthesize. It is the artifact that, if filled in honestly every week, becomes the record of how the architect thinks. Below is the version I would actually hand a new architect on day one. The rule is that every section must be answerable in plain language. If a section becomes a list of links, the artifact is failing.
# Architect Weekly Update — [Name], week of [Date]
## State of play (one paragraph)
What changed this week, written for someone two layers up.
Not a list of activities. A summary of the situation.
## Active workstreams
For each:
- Name and one-line description
- Phase: concept | validation | implementation
- Systems and teams affected
- Single biggest dependency right now
## Scope and impact
- The business outcome this work is meant to produce
- The systems most affected
- Whether the work is on the critical path of any committed delivery
## Risks
For each:
- Description, in one sentence
- Severity: low | medium | high
- What you are doing about it, or what would need to be true to mitigate it
- What would have to happen for this to actually go wrong
## Milestones
For each:
- Decision or deliverable
- Target date
- Status: green | yellow | red, with one-line reason
## Decisions needed
For each:
- The decision
- Who you need it from
- The date by which the decision blocks downstream work
- Your recommendation, with reasoning
## What I learned this week
One or two sentences. Optional, but the architects who do this
consistently get materially better, and so do the people reading them.
The template is not the magic. The magic is that it gets filled in honestly, every week, by the same person, and that the people reading it actually read it. Over a few months, the artifact becomes a record of how the architect thinks, what they got right, and what they missed. That record is worth more than any single review.
Closing
Cadence is not a meeting culture. It is the discipline of writing down, on a rhythm that matches the rate of change, what is actually true about the work. Boeing's MCAS failure, Amazon's six-pager, Stripe's footnotes, Grove's operations reviews, Chesky's tiered reviews: they are all the same observation pointed in different directions. The organizations that ship at scale are the ones where the artifacts force the thinking, the rhythm forces the reading, and the meeting is the consequence rather than the cause.
If you only change one thing this week, refuse to hold a recurring review without a written artifact. Watch what happens to the meeting. Watch what happens to the work.

About Brandon Wilburn
As a technology and business thought leader, Brandon Wilburn is currently the Chief Architect at Spirent Communications, leading the Lifecycle Service Assurance business unit. He provides vision and drives the company's strategic initiatives through customer and vendor engagements, value stream product deliveries, multi-national reorganization, cross-vertical engineering efficiencies, business development, and Innovation Lab creation.
Brandon works with CEOs, CTOs, GMs, R&D VPs, and other leaders to achieve successful business outcomes for multinational organizations in highly technical and challenging domains. He provides direct counsel to executives on markets, strategy, acquisitions, and execution.
With an effortless communication style that transcends engineering, technology, and marketing, Brandon is adept at engaging marquee customers, quickly building relationships, creating strategic alignment, and delivering customer value.
He built a new multinational R&D Innovation Lab organization from inception to scaled delivery, ultimately 70 people strong with an annual budget of 5 million, leveraging FTEs and consulting talent from the United States, Canada, the United Kingdom, Poland, Lithuania, Romania, Ukraine, Russia, and India, all delivering new products together successfully. He directed and fostered the latest in best practices in organization structure, methodology, and engineering for products and platforms.
Brandon believes strongly in an organization's culture, organizing internal and external events such as Hackathons and Demo Days to support and propagate a positive engineering community.