
08 May 2026

Copilot Agent Mode for Executive Presentations: Three Workflows That Save Senior Leaders Four Hours

Quick answer: Copilot Agent Mode is most useful to senior leaders when it runs multi-step jobs end to end — not single-prompt slide generation. The three workflows that consistently move a four-hour executive deck job to twenty minutes are the source-document compression workflow, the strategic narrative draft workflow, and the objection-mapped Q&A pre-mortem. Each one chains research, structuring, and drafting into a single instruction set the agent executes while you do other work.

Henrik runs strategy at a mid-cap European insurer. Last quarter he was asked to present a market-entry analysis to the executive committee with three days’ notice. The full input pile was eighty-four pages — a McKinsey scoping memo, an internal pricing model, two regulatory briefings, and the previous quarter’s competitive review. He spent the first day reading. He spent the second day building outline drafts in Word. He spent the third evening assembling slides at home, having already missed a parents’ evening for his daughter. The deck went well. The process broke him.

Three months later he was asked for a similar piece on a different market. This time he opened Copilot Agent Mode at 09:00, fed it the source documents, gave it a single multi-step instruction, and stepped away for forty minutes. By the time he came back, the agent had produced a structured narrative outline, a draft of the headline slide for each section, and a Q&A preparation document anticipating the eight most likely committee objections. The full deck still required Henrik’s editorial judgement. But the four hours of preparation work that used to crush his evenings was now a twenty-minute review of agent output before lunch.

The difference between the two experiences was not better prompting. It was a different mode of using AI. Single-prompt Copilot — the chat box approach — produces one output for one input. Agent Mode chains research, structuring, drafting, and review into a single autonomous run. For senior leaders who are time-poor and judgement-rich, this is a structurally different tool, and the workflows that suit it are not the workflows you would use in chat.

Looking for the structured framework for using AI in executive presentation work?

The AI-Enhanced Presentation Mastery course is the self-paced framework for senior professionals using AI to build executive-grade presentations. Eight modules, eighty-three lessons, monthly cohort enrolment, two optional recorded coaching sessions.

Explore the AI-Enhanced Programme →

Agent Mode versus single-prompt Copilot

The mental model most senior leaders carry from earlier ChatGPT use is single-prompt: you ask, the model answers, you adjust, you ask again. That mental model is what makes Copilot feel like a slow assistant. You spend more time prompting than you save in output. The work is choppy. Context evaporates between turns. By prompt twelve you are repeating yourself.

Agent Mode reverses the structure. Instead of one prompt at a time, you give the agent an instruction with multiple sub-steps, a defined output, and access to source documents or tools. The agent then runs the steps in sequence, calling tools as needed, and returns the completed work product. You review and edit. You do not iterate prompt by prompt.

The shift is from “AI as conversation partner” to “AI as task-running junior analyst”. For executive presentation work — where the inputs are messy, the structure is established, and the output needs to look like senior thinking — the second model is materially more useful. Three workflows in particular consistently take a four-hour preparation job to twenty minutes of editorial review.

[Infographic: comparison of single-prompt Copilot versus Agent Mode for executive presentations across four dimensions: input type, output style, presenter time required, and best-use scenario]

Workflow one: source-document compression

The first workflow exists because senior leaders are routinely asked to present material they did not write themselves. A scoping memo from the strategy team. Two analyst reports. A regulatory briefing. A pricing model. The job is not to summarise — it is to produce a ten-minute executive narrative from eighty pages of mixed-format source material.

The agent instruction has four parts.

  • The document set: attach or reference all source files in one batch.
  • The output specification: a structured outline with no more than seven top-level sections, each section limited to forty words, each section flagged for the source it draws from.
  • The constraint set: highlight contradictions between sources rather than papering over them; flag any claim where the underlying evidence is one analyst’s opinion rather than a verifiable data point.
  • The audience frame: write the outline for an executive committee whose first question will be “what is the decision you want from us, and what could go wrong?”
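For readers who keep these instructions as reusable templates, the four parts can be assembled programmatically before pasting into the agent. A minimal Python sketch; the file names, wording, and function are illustrative assumptions, not part of Copilot's interface:

```python
# Toy sketch: assemble the four-part compression instruction as one
# reusable template. File names and phrasing are illustrative only.

def build_compression_instruction(source_files, max_sections=7, max_words=40):
    """Combine document set, output spec, constraints, and audience
    frame into a single multi-step instruction string."""
    documents = "Read all attached sources: " + ", ".join(source_files) + "."
    output_spec = (
        f"Produce a structured outline with no more than {max_sections} "
        f"top-level sections, each limited to {max_words} words and "
        "flagged for the source it draws from."
    )
    constraints = (
        "Highlight contradictions between sources rather than papering "
        "over them, and flag any claim resting on one analyst's opinion "
        "rather than a verifiable data point."
    )
    audience = (
        "Write the outline for an executive committee whose first "
        "question will be: what is the decision you want from us, "
        "and what could go wrong?"
    )
    return "\n\n".join([documents, output_spec, constraints, audience])

instruction = build_compression_instruction(
    ["scoping-memo.docx", "pricing-model.xlsx", "regulatory-brief.pdf"]
)
```

The value of templating is consistency: the constraint set and audience frame stay identical across briefs, so only the document list changes from run to run.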

What the agent returns is not a finished deck. It is a working outline that has done the synthesis work — the part that costs the most time and the least intellectual originality. You read the outline. You disagree with two sections. You rewrite one and reorder another. The total editorial pass takes fifteen to twenty minutes. The synthesis work that would have taken three hours of reading and outlining is already done.

The reason this workflow saves so much time is that the agent reads at machine speed and synthesises across documents simultaneously. A human presenter reads sequentially, holds context in working memory, and synthesises last. The agent does the reverse. Neither is “better thinking” — they are different cognitive shapes. For source-heavy executive briefs where the synthesis is mechanical and the judgement is editorial, the agent’s shape is faster.

Workflow two: strategic narrative draft

The second workflow takes the compressed outline and produces a slide-by-slide narrative draft. This is the step where most single-prompt Copilot use falls apart, because slide generation in chat tends to produce either generic structures (problem-solution-benefit, repeated indefinitely) or slides that look polished but say nothing.

The agent instruction is more directive. Specify the narrative arc: situation, complication, resolution, decision, risk. Specify the section count and the exact role of each section. Specify the slide format: one headline statement per slide, no more than three supporting bullets, no jargon that has not been defined in the preceding section. Most importantly, specify the headline syntax explicitly — “the headline of every slide must be a complete sentence that states a finding, not a topic. ‘Three regions account for sixty per cent of the addressable market’ is a finding. ‘Market analysis’ is a topic.”

The agent will then produce a draft that respects the narrative architecture. The draft will not be final-quality. The headlines will need sharpening. Some slides will read as if the agent did not fully understand a niche term. But the structural work — sequencing the argument, allocating points to slides, drafting the supporting bullets — is done. Your job becomes editorial: tightening twelve headlines and reorganising two sections, instead of building thirty slides from a blank page.

Two specific instructions tend to lift output quality dramatically. The first is “include a ‘so what’ line at the bottom of every slide that states the implication for the executive committee in one sentence.” The second is “after each section, draft a transition sentence that links the closing point of the previous section to the opening point of the next.” Both are simple to specify. Both are work the agent does well. Both are work that human presenters routinely skip when time-pressed, leaving decks with strong individual slides and weak overall flow. Senior professionals using AI well are getting more value from structured prompt patterns like these than from any single dramatic prompt.

[Infographic: roadmap of the three Copilot Agent Mode workflows for executive presentations — source-document compression, strategic narrative draft, and Q&A pre-mortem — with the editorial pass that ties them together]

The complete framework for AI-assisted executive presentations

Move beyond basic AI usage. The AI-Enhanced Presentation Mastery course gives you eight self-paced modules and eighty-three lessons on using AI (including Copilot) to structure, draft, and refine presentations that work at senior levels. Two optional recorded coaching sessions. £499, lifetime access to materials.

  • 8 modules, 83 lessons of self-paced course content
  • 2 optional live coaching sessions, fully recorded — watch back anytime
  • No deadlines, no mandatory session attendance
  • New cohort opens every month — enrol whenever suits you
  • Lifetime access to all course materials

Explore the AI-Enhanced Programme →

Designed for senior professionals using AI to produce executive-grade output, not generic drafts.

Workflow three: objection-mapped Q&A pre-mortem

The third workflow is the one most presenters have never tried, and the one that produces the highest leverage when the deck reaches the room. The agent’s job here is to read the draft deck, model the executive committee’s likely concerns, and produce a structured Q&A preparation document that anticipates the eight most likely objections with draft responses.

The agent instruction names the audience explicitly: not “executives” but the actual committee. “The committee includes a CFO whose previous term included a major write-down on a similar acquisition; a CEO whose stated priority for the year is operational simplification; a Chief Risk Officer who has flagged regulatory complexity in three of the last four committee meetings.” That degree of specificity changes what the agent flags. Generic objections give generic responses. Named-stakeholder objections give responses you can actually rehearse.

The output specification asks for three things per objection. The likely phrasing — how the objection will actually be stated in the room. The structural weakness it exposes — what the proposal genuinely does not yet answer. The draft response — a two-sentence reply that acknowledges the concern, names the specific evidence in the deck that addresses it, and offers a follow-up commitment if the evidence is incomplete. This is not the same as an FAQ section in the appendix. It is preparation work for live performance.
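As a sketch, the three-part output specification can be modelled as a simple record per objection. The field names and the worked example below are illustrative assumptions, not agent output:

```python
# Illustrative record for one entry in the Q&A pre-mortem document.
# The three fields mirror the output specification: likely phrasing,
# the structural weakness it exposes, and a two-sentence draft response.
from dataclasses import dataclass

@dataclass
class Objection:
    phrasing: str   # how the objection will be stated in the room
    weakness: str   # what the proposal does not yet answer
    response: str   # acknowledge, cite deck evidence, commit follow-up

    def as_prep_note(self):
        """Render one rehearsable note for the preparation document."""
        return (f"Objection: {self.phrasing}\n"
                f"Exposes: {self.weakness}\n"
                f"Response: {self.response}")

note = Objection(
    phrasing="What happens to payback if market entry slips a year?",
    weakness="The model assumes a Q1 launch with no delay sensitivity.",
    response=("The base case holds to a nine-month delay; slide 14 shows "
              "the sensitivity. We will circulate a full delay scenario "
              "this week."),
).as_prep_note()
```

Eight such records, each tied to a named committee member's likely concern, is the whole pre-mortem document.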

What you get back is a document that surfaces holes in the proposal you would not otherwise have noticed before the meeting. Nine times out of ten, at least one of the agent’s anticipated objections turns out to be a real gap that needs addressing in the deck before presenting. The agent does not have committee context the way you do. But it does notice gaps with a different cognitive bias than your own — and that complementary bias is where the value lies.

The editorial pass that turns agent output into executive output

None of these workflows produce final-quality executive material on their own. The agent produces structured first drafts. The editorial pass — the human judgement applied to that draft — is what produces senior output. This is the part that nervous AI users skip and that experienced AI users obsess over.

Five things matter in the editorial pass.

  • The headlines. Re-read every slide headline aloud and rewrite any that state a topic rather than a finding. The agent will get this right perhaps seventy per cent of the time; the other thirty per cent are where decks lose authority.
  • The numbers. Verify every quantitative claim against the source document. Agents hallucinate numbers, especially in compression workflows.
  • The section flow. Does the argument land harder by the end, or does it dissipate? If it dissipates, reorder.
  • The language register. Replace any phrasing that sounds like a generic AI tone — “leveraging synergies”, “in today’s dynamic landscape” — with the language your committee actually uses.
  • The omissions. What does the deck not say that you, as the human in the room, know matters? The agent does not have your situational awareness. You do.

If you want the structured patterns for each of these editorial moves — the headline rewrite framework, the number-verification checklist, the language-register adjustments — the AI-Enhanced Presentation Mastery course walks through them across eight modules, with worked examples for board, investment committee, and steering committee scenarios.

Need the prompt library to run these workflows tomorrow?

The Executive Prompt Pack — £19.99, instant access — gives you 71 ChatGPT and Copilot prompts designed for PowerPoint presentation work. Includes prompt patterns for source compression, slide drafting, and headline sharpening that work in both chat and Agent Mode.

Get the Executive Prompt Pack →

FAQ

Is Copilot Agent Mode different from regular Copilot in PowerPoint?

Yes. Regular Copilot in PowerPoint generates slides one prompt at a time within the application. Agent Mode runs multi-step tasks autonomously — reading source documents, structuring an outline, drafting headlines, anticipating objections — in a single instruction set, and returns the work product after a sequence of steps it has chosen and executed. For executive presentation work where the inputs are large and the steps are predictable, Agent Mode saves materially more time than chat-style prompting.

How long does an Agent Mode workflow actually take?

Each of the three workflows in this article takes between fifteen and forty minutes of agent runtime, depending on the size of the source documents. The presenter is not active during that time — the agent runs while you do other work. The presenter’s active time is the editorial pass at the end, which usually takes fifteen to twenty-five minutes per workflow. Total senior-leader time per workflow tends to be twenty to thirty minutes, replacing what was often two to four hours of manual preparation.

Will Agent Mode hallucinate numbers from my source documents?

It can, particularly in compression workflows where the agent restates figures from longer source material. Treat every quantitative claim in agent output as a flag for verification, not a finished statement. Build the verification step into your editorial pass: open the source, locate the figure, confirm the agent’s restatement is accurate. The time cost is small. The credibility cost of presenting a hallucinated number to an executive committee is large.

Can Agent Mode replace a junior analyst?

For specific tasks within the presentation workflow, it can replicate the work an analyst would have done in synthesis and first-draft slide generation. It cannot replace judgement, situational awareness, stakeholder knowledge, or the editorial decisions that turn a draft into a senior-level deck. The most useful framing is that Agent Mode is a tireless drafting partner whose work always needs senior review — not a substitute for the senior thinking that makes the deck land.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Executive Presentation Checklist — a single-page review you can run on any AI-assisted draft before sending it to a senior audience.

Next step: pick the next executive deck on your calendar that has source material attached, and run the source-document compression workflow on it before you do anything else. Allow yourself thirty minutes for the agent to work and twenty minutes for editorial review. Compare that to your usual preparation time. The gap is the value of switching from chat-style prompting to Agent Mode for this kind of work.

Related reading: Copilot Agent Mode executive deck workflow — the five-step structure, and why AI-generated slides look generic and how to fix the editorial pass.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.

08 May 2026

AI-Generated Slides That Get Approved: The Human Editing Pass Board Members Cannot See

Quick answer: AI-generated slides that get board approval share one feature: a structured editorial pass on top of the AI draft. Boards reject AI output that has been left raw because it reads as anonymous, generic, and unanchored to the company’s specific situation. The editorial pass — six moves, applied in order — converts a generic draft into a deck that sounds like it came from a senior insider. The board never sees the AI underneath. They see a presenter who knows the business.

Rafaela had used Copilot to draft the strategy refresh deck. Twenty-eight slides, generated in eleven minutes, looking polished and structured. She sent it to her chief of staff for a sanity check the day before the board meeting. The chief of staff replied with a single sentence: “This reads like it could have come from any of our competitors.” Rafaela read the deck again with that comment in mind. The chief of staff was right. Every slide was technically correct. Every slide was anonymous. There was nothing in it that said this was their company, their numbers, their situation.

She had two choices. Present the deck as-is and trust that the board would forgive the generic feel because the underlying logic was sound. Or stay up that night doing the editorial pass that would convert the deck from a Copilot draft into something that sounded like senior thinking from inside the business. She chose the second. She also resented the third hour of editing, because the whole point of using AI had been to save time. But by midnight she had a deck that was unmistakably hers — and the board approved the strategy refresh the next morning without the kind of friction that usually attaches to AI-flavoured material.

The editorial pass she applied that night is not difficult. It is six specific moves, applied in a fixed order. Most senior presenters who use AI for deck drafting either skip the pass entirely (and present generic decks that get probed harder than they should be) or do parts of it ad hoc (and miss the moves that matter most). The pass is what turns AI-generated slides into board-approved slides. The board does not see the AI underneath. They see a presenter who knows the business cold.

Looking for the structured framework for executive-grade AI-assisted presentations?

The AI-Enhanced Presentation Mastery course is the self-paced framework for senior professionals using AI to build presentations that work at board level. Eight modules, eighty-three lessons, monthly cohort enrolment, two optional recorded coaching sessions.

Explore the AI-Enhanced Programme →

Why boards reject raw AI-generated decks

Boards do not reject AI output because they detect AI specifically. They reject it because the same patterns AI produces — generic phrasing, evenly weighted bullets, no anchored evidence, no clear decision ask — are the patterns of presentations that historically came from junior staff or external consultants who did not understand the business. Boards have learned to push back hard on those patterns, regardless of who produced them. AI just makes those patterns appear more often, and faster, in decks that should be sharper.

Three signals trigger board scepticism almost immediately. The first is anonymous language. “Leveraging operational efficiencies to drive sustainable growth” could describe any company in any sector. The second is unanchored claims. A bullet that says “the market is shifting toward platform-based solutions” without a citation, an internal data point, or a named competitor reads as filler. The third is structural symmetry that is too clean. Three points per slide, three sub-bullets per point, three slides per section — the architecture itself signals that no human did the messy work of weighting what matters.

The editorial pass exists to remove all three signals. It does not require rewriting from scratch. It requires applying six moves in sequence. Each move targets one of the patterns boards reject. Done in order, the pass takes about ninety minutes for a thirty-slide deck. Done out of order, or partially, it takes longer and produces inconsistent results.

[Infographic: stacked cards showing the six moves of the editorial pass for AI-generated executive slides — rewrite headlines as findings, anchor claims to evidence, replace generic language with insider phrasing, cut completeness slides, install the decision sentence, and read aloud against board reaction]

Move one: rewrite the headlines as findings

The first move targets the highest-leverage element on every slide: the headline. AI-generated decks tend to produce topic headlines — “Market Analysis”, “Competitive Landscape”, “Financial Performance” — because the corporate templates and reports in the models’ training data are dominated by topic-style headlines. Topic headlines tell the audience what the slide is about. They do not tell the audience what to conclude. Board members do not read decks for topics. They read for findings.

Rewrite every headline as a complete sentence that states the conclusion of the slide. “Market Analysis” becomes “Three of our six target markets show declining willingness to pay for premium service tiers”. “Competitive Landscape” becomes “Two new entrants in the last quarter have undercut our pricing by twenty per cent without matching our service standard”. “Financial Performance” becomes “Revenue is on plan; gross margin is below plan by three points, driven by raw material cost inflation”.

The discipline is to make every headline answer the implicit question “what should I take away from this slide?” If the headline does not answer that question, the slide will not land. This single move usually accounts for more than half of the perceived improvement in a deck. Boards lean forward when headlines are findings. They glaze when headlines are topics.

Move two: anchor every claim to specific evidence

AI drafts will routinely produce claims without supporting evidence. “The market is consolidating.” “Customer expectations are evolving.” “Regulatory pressure is increasing.” None of these are wrong. All of them are unactionable without evidence. The second move is to read every bullet and ask one question: “What is the specific evidence behind this claim?” Then add the evidence to the bullet.

“The market is consolidating” becomes “Two of our top five competitors merged in Q3, reducing the active competitive set from twelve players to ten”. “Customer expectations are evolving” becomes “Our latest customer survey shows seventy per cent now expect same-day issue resolution, up from forty-five per cent two years ago”. “Regulatory pressure is increasing” becomes “The FCA’s new operational resilience framework, effective March, requires evidence of quarterly stress testing — currently we run annually”.

Boards trust specific evidence. They do not trust general claims. When you anchor every claim, the deck reads as if the presenter has done the work. When you leave claims unanchored, the deck reads as if the presenter has skimmed. AI cannot do this move for you, because the agent does not know which evidence is true for your specific company. This is editorial work that must be human. The most common reason AI-generated slides feel generic is precisely this absence of anchored evidence.

Move three: replace generic language with insider phrasing

Every organisation has its own vocabulary. The way your company refers to its customers, its competitors, its operating model, its strategic priorities — these are linguistic markers that signal “the person who wrote this works here”. AI does not have access to your internal language. It uses the generic corporate vocabulary present in its training data, which is the vocabulary of consulting reports, annual statements, and strategy textbooks.

The third move is to read every slide and replace generic phrases with the language your board actually uses. If your CEO consistently calls the market “the addressable opportunity” rather than “the TAM”, change every instance. If your operations team refers to incidents as “events” rather than “issues”, change them. If your customers are “members” or “clients” or “partners” — never “users” — change them. These edits are small. The cumulative effect is large. A deck written in your company’s language reads as insider. A deck written in generic corporate language reads as outsider, regardless of whether the author is the CEO.

[Infographic: split comparison of AI-generated raw output versus AI-edited board-ready output across three dimensions: headline style, claim evidence, and language register]

Move four: cut the slides that exist to “sound complete”

AI-generated decks tend to produce more slides than the argument needs, because the underlying prompt usually asks for completeness. “Build a strategy refresh deck for the board” produces a deck that covers everything a strategy refresh deck might cover, including sections that are not relevant to your specific situation. The fourth move is to read every section and ask “would this section’s removal weaken the argument?” If the answer is no, remove the section.

The complete framework for AI-assisted executive presentations

Build executive-grade presentations with AI assistance. The AI-Enhanced Presentation Mastery course is a self-paced programme with eight modules, eighty-three lessons. Enrol with this month’s cohort, work through at your own pace — two optional live coaching sessions are fully recorded. £499, lifetime access to materials.

  • 8 modules, 83 lessons of self-paced course content
  • 2 optional live coaching sessions, fully recorded — watch back anytime
  • No deadlines, no mandatory session attendance
  • New cohort opens every month — enrol whenever suits you
  • Lifetime access to all course materials

Explore the AI-Enhanced Programme →

Designed for senior professionals who need AI to produce executive-grade output.

Common candidates for cutting include macro-environment scene-setting that the board already lives inside; competitor profiles for competitors the board does not consider strategically relevant; appendices that exist because the AI defaulted to producing them; and “principles” or “values” slides that signal a strategy team’s thinking process rather than the board’s decision criteria. A twenty-eight-slide deck rarely needs to be twenty-eight slides. Eighteen well-edited slides almost always read sharper than the same content stretched across twenty-eight.

Cutting is harder than adding. AI tends to over-include. Senior judgement is what subtracts. The board will not miss the slides you cut. They will notice the cleaner argument that results.

Move five: install the decision sentence

The fifth move is to identify what the board needs to take away from the deck — the actual decision, recommendation, or judgement you want them to land on — and to install that sentence in three places: the closing line of the executive summary slide, the headline of the strategic recommendation slide, and the closing slide before any appendix. The same sentence, in the same words, in three places.

AI drafts almost never do this. They produce closing slides that summarise key themes (“In summary, the strategy refresh focuses on three priorities…”), which is not the same as installing a decision the board can act on. The decision sentence has a specific shape: a verb, a quantified action, a timeframe, and a qualifier. “Approve a phased twelve-month investment of £4.2m to consolidate the European platform, contingent on the operational checkpoint at month six.” That sentence can be voted on. “Focus on European platform consolidation” cannot.
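That four-element shape can even be self-checked mechanically before the deck goes out. The sketch below is a toy heuristic, not a substitute for judgement; the verb list and keyword sets are assumptions, and a passing sentence still needs human review:

```python
import re

# Toy heuristic for the decision-sentence shape described above:
# a leading action verb, a quantified element, a timeframe, and a
# qualifier. Word lists are illustrative, not an exhaustive grammar.
ACTION_VERBS = ("approve", "authorise", "commit", "fund", "adopt")
TIMEFRAMES = ("month", "quarter", "year")
QUALIFIERS = ("contingent", "subject to", "provided", "conditional")

def looks_like_decision_sentence(sentence):
    s = sentence.lower()
    has_verb = s.split()[0].strip(",.") in ACTION_VERBS     # action verb first
    has_number = bool(re.search(r"\d", s))                  # quantified action
    has_timeframe = any(word in s for word in TIMEFRAMES)   # explicit timing
    has_qualifier = any(word in s for word in QUALIFIERS)   # named condition
    return has_verb and has_number and has_timeframe and has_qualifier
```

Run against the two examples above, the heuristic accepts the £4.2m sentence and rejects “Focus on European platform consolidation”, which is the distinction the move is designed to enforce.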

Installing the decision sentence in three places is deliberate redundancy. The board reads slowly. Some members read only the executive summary. Some read only the strategic recommendation slide. Some read only the closing. Repeating the decision sentence guarantees that every reader sees it, regardless of where their attention lands. If you want to see how to structure these decision sentences across an entire deck, the AI-Enhanced Presentation Mastery course covers the decision-sentence pattern in module four with worked examples for board, investment committee, and executive committee scenarios.

Move six: read it aloud against the board’s likely reaction

The final move is the cheapest and the most consistently skipped. Read the deck aloud, slide by slide, and after each slide ask “what would each of the board members say to this?” Name them in your head. The CFO who probes assumptions. The chair who asks for the unintended consequences. The non-executive director who challenges the timing. The CEO who tests whether the recommendation is too cautious or too bold. For each likely reaction, ask: does the slide already address it, or do I need to add a line?

Some slides will need additional context. Some will need a caveat the AI omitted. Some will need an explicit “what we considered and rejected” line that pre-empts the board’s natural alternative-generation. These additions are small. They turn a deck that looks complete on paper into a deck that holds up live. The aloud-read also reveals language that looks acceptable on screen but sounds awkward when spoken — almost always a sign of phrasing the AI inserted that needs replacement.

This sixth move is what separates decks that get approved from decks that get parked for a follow-up meeting. The first five moves clean the deck up. The sixth move makes it land in the room.

Need the slide structures and templates the editorial pass refines?

The Executive Slide System — £39, instant access — includes 26 slide templates, 93 AI prompts, and 16 scenario playbooks for senior presentations. Use the templates as the structural target your AI draft is editing toward.

Get the Executive Slide System →

FAQ

How long does the editorial pass take for a thirty-slide AI-generated deck?

Done in order, the six moves typically take seventy-five to ninety minutes for a thirty-slide deck. Done out of order or partially, the same work usually takes two to three hours and produces inconsistent results. The order matters because each move targets a specific failure pattern, and earlier moves clear ground for later ones to land more easily. The headline rewrite, in particular, exposes weaknesses in the underlying argument that the next moves can then address.

Can I use AI to do the editorial pass too?

Partially. AI can flag bullets that lack evidence and suggest replacements where the evidence exists in your source documents. AI cannot replace generic language with your company’s insider vocabulary, because it does not have access to your internal language. AI cannot decide which slides to cut, because the cutting decision rests on judgement about what the board actually cares about. The fastest workflow is human-led editorial pass with AI used to flag candidate fixes — not the other way round.

Will the board notice that AI was used?

Boards rarely care about the tooling. They care about whether the deck reads as senior thinking from inside the business. A well-edited AI-assisted deck will not draw any specific reaction beyond the normal probing the deck content invites. A poorly-edited AI-assisted deck will draw the same reaction as a poorly-prepared deck of any origin: probing questions about why the argument is generic. The disclosure question is a non-issue if the editorial pass has done its work. If you want the framework for handling direct AI-disclosure questions when they do arise, the three-step response structure handles them in under thirty seconds.

Does this editorial pass apply to other AI tools, not just Copilot?

Yes. The six moves are tool-agnostic. They target the failure patterns of generic AI output regardless of whether the underlying model is Copilot, ChatGPT, Claude, or Gemini. The patterns are the same because the training data overlaps. The pass works on any AI-generated executive deck.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Executive Presentation Checklist — a single-page review you can run on any AI-assisted draft before the editorial pass.

Next step: take the next AI-generated deck on your calendar and run the six moves on it in order. Track the time it takes. Note which moves expose the weakest parts of the underlying argument. Those are the moves you will get faster at — and the ones that will most consistently produce approved decks.

Related reading: The Copilot Agent Mode workflow that produces editable executive drafts.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.