
08 May 2026

Copilot Agent Mode for Executive Presentations: Three Workflows That Save Senior Leaders Four Hours

Quick answer: Copilot Agent Mode is most useful to senior leaders when it runs multi-step jobs end to end — not single-prompt slide generation. The three workflows that consistently move a four-hour executive deck job to twenty minutes are the source-document compression workflow, the strategic narrative draft workflow, and the objection-mapped Q&A pre-mortem. Each one chains research, structuring, and drafting into a single instruction set the agent executes while you do other work.

Henrik runs strategy at a mid-cap European insurer. Last quarter he was asked to present a market-entry analysis to the executive committee with three days’ notice. The full input pile was eighty-four pages — a McKinsey scoping memo, an internal pricing model, two regulatory briefings, and the previous quarter’s competitive review. He spent the first day reading. He spent the second day building outline drafts in Word. He spent the third evening assembling slides at home, having already missed a parents’ evening for his daughter. The deck went well. The process broke him.

Three months later he was asked for a similar piece on a different market. This time he opened Copilot Agent Mode at 09:00, fed it the source documents, gave it a single multi-step instruction, and stepped away for forty minutes. By the time he came back, the agent had produced a structured narrative outline, a draft of the headline slide for each section, and a Q&A preparation document anticipating the eight most likely committee objections. The full deck still required Henrik’s editorial judgement. But the four hours of preparation work that used to crush his evenings was now a twenty-minute review of agent output before lunch.

The difference between the two experiences was not better prompting. It was a different mode of using AI. Single-prompt Copilot — the chat box approach — produces one output for one input. Agent Mode chains research, structuring, drafting, and review into a single autonomous run. For senior leaders who are time-poor and judgement-rich, this is a structurally different tool, and the workflows that suit it are not the workflows you would use in chat.

Looking for the structured framework for using AI in executive presentation work?

The AI-Enhanced Presentation Mastery course is the self-paced framework for senior professionals using AI to build executive-grade presentations. Eight modules, eighty-three lessons, monthly cohort enrolment, two optional recorded coaching sessions.

Explore the AI-Enhanced Programme →

Agent Mode versus single-prompt Copilot

The mental model most senior leaders carry from earlier ChatGPT use is single-prompt: you ask, the model answers, you adjust, you ask again. That mental model is what makes Copilot feel like a slow assistant. You spend more time prompting than you save in output. The work is choppy. Context evaporates between turns. By prompt twelve you are repeating yourself.

Agent Mode reverses the structure. Instead of one prompt at a time, you give the agent an instruction with multiple sub-steps, a defined output, and access to source documents or tools. The agent then runs the steps in sequence, calling tools as needed, and returns the completed work product. You review and edit. You do not iterate prompt by prompt.

The shift is from “AI as conversation partner” to “AI as task-running junior analyst”. For executive presentation work — where the inputs are messy, the structure is established, and the output needs to look like senior thinking — the second model is materially more useful. Three workflows in particular consistently take a four-hour preparation job to twenty minutes of editorial review.

[Infographic: single-prompt Copilot versus Agent Mode for executive presentations, compared across four dimensions: input type, output style, presenter time required, and best-use scenario]

Workflow one: source-document compression

The first workflow exists because senior leaders are routinely asked to present material they did not write themselves. A scoping memo from the strategy team. Two analyst reports. A regulatory briefing. A pricing model. The job is not to summarise — it is to produce a ten-minute executive narrative from eighty pages of mixed-format source material.

The agent instruction has four parts. First, the document set: attach or reference all source files in one batch. Second, the output specification: a structured outline with no more than seven top-level sections, each section limited to forty words, each section flagged for the source it draws from. Third, the constraint set: highlight contradictions between sources rather than papering over them; flag any claim where the underlying evidence is one analyst’s opinion rather than a verifiable data point. Fourth, the audience frame: write the outline for an executive committee whose first question will be “what is the decision you want from us, and what could go wrong?”
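As a concrete sketch, the four parts can be assembled into a single instruction before it is handed to the agent. Everything below (the file names, the exact wording, the helper function) is an illustrative assumption, not a Copilot feature; the assembled string is simply what you would paste into Agent Mode.

```python
# Illustrative sketch: assembling the four-part compression instruction
# as one prompt string. File names and wording are placeholder
# assumptions; the result is pasted into Copilot Agent Mode by hand.

SOURCES = [
    "strategy_scoping_memo.pdf",    # hypothetical file names
    "pricing_model_summary.xlsx",
    "regulatory_briefing_q1.pdf",
]

OUTPUT_SPEC = (
    "Produce a structured outline with no more than 7 top-level sections. "
    "Limit each section to 40 words and flag the source document it draws from."
)

CONSTRAINTS = (
    "Highlight contradictions between sources rather than papering over them. "
    "Flag any claim whose evidence is one analyst's opinion, not a data point."
)

AUDIENCE_FRAME = (
    "Write for an executive committee whose first question will be: "
    "'What is the decision you want from us, and what could go wrong?'"
)

def build_compression_instruction(sources, output_spec, constraints, audience):
    """Join the four parts of the compression instruction in order."""
    doc_list = "\n".join(f"- {name}" for name in sources)
    return (
        f"Source documents:\n{doc_list}\n\n"
        f"Output specification: {output_spec}\n\n"
        f"Constraints: {constraints}\n\n"
        f"Audience frame: {audience}"
    )

instruction = build_compression_instruction(
    SOURCES, OUTPUT_SPEC, CONSTRAINTS, AUDIENCE_FRAME
)
```

The point of writing it down once is reuse: the document set and audience frame change per deck; the output specification and constraints rarely do.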

What the agent returns is not a finished deck. It is a working outline that has done the synthesis work — the part that costs the most time and the least intellectual originality. You read the outline. You disagree with two sections. You rewrite one and reorder another. The total editorial pass takes fifteen to twenty minutes. The synthesis work that would have taken three hours of reading and outlining is already done.

The reason this workflow saves so much time is that the agent reads at machine speed and synthesises across documents simultaneously. A human presenter reads sequentially, holds context in working memory, and synthesises last. The agent does the reverse. Neither is “better thinking” — they are different cognitive shapes. For source-heavy executive briefs where the synthesis is mechanical and the judgement is editorial, the agent’s shape is faster.

Workflow two: strategic narrative draft

The second workflow takes the compressed outline and produces a slide-by-slide narrative draft. This is the step where most single-prompt Copilot use falls apart, because slide generation in chat tends to produce either generic structures (problem-solution-benefit, repeated indefinitely) or slides that look polished but say nothing.

The agent instruction is more directive. Specify the narrative arc: situation, complication, resolution, decision, risk. Specify the section count and the exact role of each section. Specify the slide format: one headline statement per slide, no more than three supporting bullets, no jargon that has not been defined in the preceding section. Most importantly, specify the headline syntax explicitly — “the headline of every slide must be a complete sentence that states a finding, not a topic. ‘Three regions account for sixty per cent of the addressable market’ is a finding. ‘Market analysis’ is a topic.”
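The slide-format rules are mechanical enough to check programmatically once the agent returns its draft. The sketch below assumes a crude word-count heuristic for the finding-versus-topic rule; it illustrates the specification, it is not a grammar checker.

```python
# Illustrative sketch: encoding the workflow-two slide specification so a
# returned draft can be checked mechanically. The arc labels and limits
# follow the article; the check heuristics are assumptions.

NARRATIVE_ARC = ["situation", "complication", "resolution", "decision", "risk"]
MAX_BULLETS_PER_SLIDE = 3

def check_slide(headline: str, bullets: list[str]) -> list[str]:
    """Return a list of spec violations for one drafted slide."""
    problems = []
    # A finding-style headline is a full sentence; a topic-style headline
    # is a short noun phrase. Word count is a crude proxy for that rule.
    if len(headline.split()) < 5:
        problems.append(f"Headline reads as a topic, not a finding: {headline!r}")
    if len(bullets) > MAX_BULLETS_PER_SLIDE:
        problems.append(f"Too many bullets: {len(bullets)} > {MAX_BULLETS_PER_SLIDE}")
    return problems

# Usage: the first slide passes both checks, the second fails both.
ok = check_slide(
    "Three regions account for sixty per cent of the addressable market",
    ["Region A: 28%", "Region B: 19%", "Region C: 13%"],
)
bad = check_slide("Market analysis", ["a", "b", "c", "d"])
```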

The agent will then produce a draft that respects the narrative architecture. The draft will not be final-quality. The headlines will need sharpening. Some slides will read as if the agent did not fully understand a niche term. But the structural work — sequencing the argument, allocating points to slides, drafting the supporting bullets — is done. Your job becomes editorial: tightening twelve headlines and reorganising two sections, instead of building thirty slides from a blank page.

Two specific instructions tend to lift output quality dramatically. The first is “include a ‘so what’ line at the bottom of every slide that states the implication for the executive committee in one sentence.” The second is “after each section, draft a transition sentence that links the closing point of the previous section to the opening point of the next.” Both are simple to specify. Both are work the agent does well. Both are work that human presenters routinely skip when time-pressed, leaving decks with strong individual slides and weak overall flow. Senior professionals using AI well are getting more value from structured prompt patterns like these than from any single dramatic prompt.

[Infographic: roadmap of the three Copilot Agent Mode workflows for executive presentations (source-document compression, strategic narrative draft, Q&A pre-mortem), with the editorial pass that ties them together]

The complete framework for AI-assisted executive presentations

Move beyond basic AI usage. The AI-Enhanced Presentation Mastery course gives you eight self-paced modules and eighty-three lessons on using AI (including Copilot) to structure, draft, and refine presentations that work at senior levels. Two optional recorded coaching sessions. £499, lifetime access to materials.

  • 8 modules, 83 lessons of self-paced course content
  • 2 optional live coaching sessions, fully recorded — watch back anytime
  • No deadlines, no mandatory session attendance
  • New cohort opens every month — enrol whenever suits you
  • Lifetime access to all course materials

Explore the AI-Enhanced Programme →

Designed for senior professionals using AI to produce executive-grade output, not generic drafts.

Workflow three: objection-mapped Q&A pre-mortem

The third workflow is the one most presenters have never tried, and the one that produces the highest leverage when the deck reaches the room. The agent’s job here is to read the draft deck, model the executive committee’s likely concerns, and produce a structured Q&A preparation document that anticipates the eight most likely objections with draft responses.

The agent instruction names the audience explicitly: not “executives” but the actual committee. “The committee includes a CFO whose previous role included a major write-down on a similar acquisition; a CEO whose stated priority for the year is operational simplification; a Chief Risk Officer who has flagged regulatory complexity in three of the last four committee meetings.” That degree of specificity changes what the agent flags. Generic objections give generic responses. Named-stakeholder objections give responses you can actually rehearse.

The output specification asks for three things per objection. The likely phrasing — how the objection will actually be stated in the room. The structural weakness it exposes — what the proposal genuinely does not yet answer. The draft response — a two-sentence reply that acknowledges the concern, names the specific evidence in the deck that addresses it, and offers a follow-up commitment if the evidence is incomplete. This is not the same as an FAQ section in the appendix. It is preparation work for live performance.
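The three-part output specification maps naturally onto a small data structure. The sketch below is illustrative: the objection text is invented example content, and nothing here calls Copilot itself; it only shows the shape of the prep document you are asking the agent to fill.

```python
# Illustrative sketch: the three-parts-per-objection output specification
# as a data structure, rendered into a Q&A prep document. All content
# below is invented example text, not agent output.

from dataclasses import dataclass

@dataclass
class Objection:
    likely_phrasing: str    # how it will actually be stated in the room
    exposed_weakness: str   # what the proposal does not yet answer
    draft_response: str     # two-sentence reply to rehearse

def render_prep_doc(objections: list[Objection]) -> str:
    """Format anticipated objections into a rehearsal document."""
    sections = []
    for i, obj in enumerate(objections, start=1):
        sections.append(
            f"Objection {i}: {obj.likely_phrasing}\n"
            f"  Weakness exposed: {obj.exposed_weakness}\n"
            f"  Draft response: {obj.draft_response}"
        )
    return "\n\n".join(sections)

doc = render_prep_doc([
    Objection(
        likely_phrasing="How does this survive a rate shock?",
        exposed_weakness="No downside scenario in the pricing model.",
        draft_response=(
            "The base case assumes stable rates; slide 14 shows the "
            "sensitivity. We will circulate a full stress case by Friday."
        ),
    ),
])
```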

What you get back is a document that surfaces holes in the proposal you would not otherwise have noticed before the meeting. Nine times out of ten, at least one of the agent’s anticipated objections turns out to be a real gap that needs addressing in the deck before presenting. The agent does not have committee context the way you do. But it does notice gaps with a different cognitive bias than your own — and that complementary bias is where the value lies.

The editorial pass that turns agent output into executive output

None of these workflows produce final-quality executive material on their own. The agent produces structured first drafts. The editorial pass — the human judgement applied to that draft — is what produces senior output. This is the part that nervous AI users skip and that experienced AI users obsess over.

Five things matter in the editorial pass.

  • The headlines. Re-read every slide headline aloud and rewrite any that state a topic rather than a finding. The agent will get this right perhaps seventy per cent of the time; the other thirty per cent are where decks lose authority.
  • The numbers. Verify every quantitative claim against the source document. Agents hallucinate numbers, especially in compression workflows.
  • The section flow. Does the argument land harder by the end, or does it dissipate? If it dissipates, reorder.
  • The language register. Replace any phrasing that sounds like generic AI tone — “leveraging synergies”, “in today’s dynamic landscape” — with the language your committee actually uses.
  • The omissions. What does the deck not say that you, as the human in the room, know matters? The agent does not have your situational awareness. You do.
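The number-verification step is the one part of the editorial pass that benefits from a mechanical assist. A minimal sketch, assuming plain-text slide content: extract every numeric claim so none is missed during the manual check against sources. The regex and the sample sentence are illustrative assumptions, and the extraction does not (and cannot) verify anything for you.

```python
# Illustrative sketch: pulling quantitative claims out of draft slide
# text so each one can be checked against its source by hand. Written-out
# numbers such as "sixty per cent" are out of scope for this sketch.

import re

# Optional currency symbol, digits with separators, optional decimal,
# optional percentage or magnitude suffix.
NUMBER_PATTERN = re.compile(r"[£$€]?\d[\d,]*(?:\.\d+)?\s*(?:%|bn|m|k)?")

def quantitative_claims(slide_text: str) -> list[str]:
    """Return every numeric claim in a slide, for manual verification."""
    return [m.group().strip() for m in NUMBER_PATTERN.finditer(slide_text)]

claims = quantitative_claims(
    "Three regions account for 60% of the £2.4bn addressable market, "
    "up from 54% last year."
)
```

Printing the list next to the open source documents turns "verify every number" from an intention into a checklist.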

If you want the structured patterns for each of these editorial moves — the headline rewrite framework, the number-verification checklist, the language-register adjustments — the AI-Enhanced Presentation Mastery course walks through them across eight modules, with worked examples for board, investment committee, and steering committee scenarios.

Need the prompt library to run these workflows tomorrow?

The Executive Prompt Pack — £19.99, instant access — gives you 71 ChatGPT and Copilot prompts designed for PowerPoint presentation work. Includes prompt patterns for source compression, slide drafting, and headline sharpening that work in both chat and Agent Mode.

Get the Executive Prompt Pack →

FAQ

Is Copilot Agent Mode different from regular Copilot in PowerPoint?

Yes. Regular Copilot in PowerPoint generates slides one prompt at a time within the application. Agent Mode runs multi-step tasks autonomously — reading source documents, structuring an outline, drafting headlines, anticipating objections — in a single instruction set, and returns the work product after a sequence of steps it has chosen and executed. For executive presentation work where the inputs are large and the steps are predictable, Agent Mode saves materially more time than chat-style prompting.

How long does an Agent Mode workflow actually take?

Each of the three workflows in this article takes between fifteen and forty minutes of agent runtime, depending on the size of the source documents. The presenter is not active during that time — the agent runs while you do other work. The presenter’s active time is the editorial pass at the end, which usually takes fifteen to twenty-five minutes per workflow. Total senior-leader time per workflow tends to be twenty to thirty minutes, replacing what was often two to four hours of manual preparation.

Will Agent Mode hallucinate numbers from my source documents?

It can, particularly in compression workflows where the agent restates figures from longer source material. Treat every quantitative claim in agent output as a flag for verification, not a finished statement. Build the verification step into your editorial pass: open the source, locate the figure, confirm the agent’s restatement is accurate. The time cost is small. The credibility cost of presenting a hallucinated number to an executive committee is large.

Can Agent Mode replace a junior analyst?

For specific tasks within the presentation workflow, it can replicate the work an analyst would have done in synthesis and first-draft slide generation. It cannot replace judgement, situational awareness, stakeholder knowledge, or the editorial decisions that turn a draft into a senior-level deck. The most useful framing is that Agent Mode is a tireless drafting partner whose work always needs senior review — not a substitute for the senior thinking that makes the deck land.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Executive Presentation Checklist — a single-page review you can run on any AI-assisted draft before sending it to a senior audience.

Next step: pick the next executive deck on your calendar that has source material attached, and run the source-document compression workflow on it before you do anything else. Allow yourself thirty minutes for the agent to work and twenty minutes for editorial review. Compare that to your usual preparation time. The gap is the value of switching from chat-style prompting to Agent Mode for this kind of work.

Related reading: the Copilot Agent Mode executive deck workflow (the five-step structure), and why AI-generated slides look generic and how to fix it in the editorial pass.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.

08 May 2026

Imposter Syndrome Using AI for Presentations: When You Feel You Are Cheating

Quick answer: The “I am cheating” feeling that surfaces when senior professionals use AI for presentations is a misread of the work. Imposter syndrome attaches to AI use because the AI does the visible drafting and the human does the invisible editorial judgement — so it looks, from inside, as if you contributed nothing. The reality is reversed. The judgement is the work. The drafting is the typing. Three reframes resolve the feeling without losing the productive caution underneath it.

Ines is a director of clinical operations in a mid-size pharmaceutical company. She had been using Copilot for three weeks before the feeling caught up with her. The feeling arrived during a steering committee meeting, mid-sentence, while she was presenting a deck she had drafted with AI assistance. She was making a strong point about supply chain resilience when an internal voice cut in: “You did not write this. You should not be presenting this. If they ask you something the deck does not cover, they will see you do not actually know it.”

The voice was loud enough that she lost her place for half a second. The committee did not notice. She recovered. The presentation went well. But the feeling stayed with her for the rest of the day and crystallised that evening into a question she put to a colleague over dinner: “Am I cheating? Should I just write the decks myself like I used to?” Her colleague, who had been using Copilot since launch, said something useful: “If you wrote the prompt and you read the output and you decided what to keep and what to change, you wrote the deck. The keyboard is not where the work happens.”

That sentence is technically correct, but it does not always land in the moment, because imposter syndrome does not respond to technical arguments. The cheating feeling has its own logic, and arguing with it head-on rarely works. What does work is understanding why the feeling shows up specifically with AI — and then applying three reframes that change the underlying perception, not just the surface argument.

Looking for a structured way to manage performance anxiety in high-stakes presentations?

Conquer Your Fear of Public Speaking is a self-paced programme designed for senior professionals who experience performance anxiety in high-stakes presentation work. Practical techniques for the in-the-moment recovery you can use in any meeting.

Explore the Programme →

Why the cheating feeling shows up

Imposter syndrome activates when there is a perceived gap between what others believe you contributed and what you privately know you contributed. AI use opens that gap by design. The audience sees a polished deck. You know that some of the structure came from a model. The two pictures do not match in your head, and the mismatch reads as deception.

The feeling intensifies if your professional identity is tied to “I produce my own work”. Many senior leaders built their careers on visible production — writing the strategy memo, building the financial model, drafting the board paper themselves. AI changes the labour mix. You still own the output, but the labour is distributed differently. The labour distribution change feels like an identity threat, even when the output quality is equal or higher.

It also intensifies in environments where AI use is technically allowed but socially ambiguous. If your employer has not explicitly endorsed AI for presentation work, but has not explicitly forbidden it either, you are operating in a grey zone. The grey zone amplifies imposter feelings because there is no external validation that what you are doing is acceptable. Your nervous system fills the validation vacuum with the worst-case interpretation: that you are doing something you would not want to admit to.

[Infographic: the imposter syndrome loop in AI-assisted presentation work: AI produces the visible draft, the human applies invisible judgement, the audience sees only the polished output, and the presenter feels the gap as cheating]

Visible drafting versus invisible judgement

The cleanest way to understand what is actually happening is to separate the visible work from the invisible work. The visible work in a deck is the typing, the layout, the wording of bullets, the choice of charts. The invisible work is the prior thinking — what to include, what to leave out, what the argument should be, which evidence carries weight, how the audience will react, where the political risk lies, what the closing decision needs to be.

For a senior-level presentation, the invisible work is roughly eighty per cent of the value. Anyone with passable Copilot skills can produce a polished thirty-slide deck on any topic in twenty minutes. Almost no one can produce a deck that lands with a specific board on a specific decision in a specific organisational moment without the invisible work that comes from years of internal context.

When you use AI for the visible work, you are outsourcing the part that has the lowest unit value of your time. You retain the invisible work — the editorial judgement that decides which AI output to keep, which to rewrite, which to cut, which to anchor with internal evidence the model could not have known. This is the work the audience cannot see, and it is also the work that your imposter voice is failing to credit. The voice notices that you typed less. It does not notice that you decided more.

Reframe one: the typing is not the work

The first reframe is to separate effort from value. There is a deeply ingrained association between visible effort and earned credit, particularly in cultures where being seen to work hard is part of the professional identity. AI breaks that association by making the visible effort smaller while leaving the cognitive load roughly constant.

The reframe is simple to state and harder to internalise: the typing is not the work. The work is the judgement applied to what gets typed. A surgeon’s value is not in the physical incision — it is in knowing where, how deep, and when to stop. The incision is the visible part. The training and judgement underneath are the invisible part. The same split applies to AI-assisted presentation work: the model makes the incision; you supply the judgement.

This reframe lands harder when you can name a specific decision you made on the most recent AI-assisted deck that the model could not have made. “I cut the section on European expansion because I knew the chair would push back on the timing — the model did not know that.” “I rewrote the headline on slide eleven because the original was technically correct but politically tone-deaf for our CFO — the model did not know that.” Naming the specific decisions that required your judgement is the most direct route to dissolving the cheating feeling. The decisions are real. They are the work.

Reframe two: AI is a tool, not a co-author

The second reframe targets the way the imposter voice tends to anthropomorphise AI. The voice often phrases the concern as “the AI wrote this, not me” — which assigns agency to the model. The model has no agency. It cannot decide what to write. It can only generate probable next tokens based on the prompt you supplied and the editorial decisions you made along the way.

The framing that helps is to compare AI to other tools you do not feel imposter syndrome about. You do not feel guilty using Excel to calculate a forecast you could have done by hand. You do not feel guilty using PowerPoint instead of drawing slides on acetate. You do not feel guilty using a spell-checker. The reason is that those tools are clearly tools — they execute under your direction, they have no agency, they do not “co-author” the output.

AI feels different because it produces something that looks like prose, and prose feels like authored content. But the AI is no more an author of your deck than Excel is the author of your forecast. It is a tool that executes your direction. The difference between a Copilot draft and an Excel formula is surface-level: both are outputs produced under your direction from inputs you supplied. The structured workflows that produce executive output reinforce this — the agent is following your instruction set, not writing the deck.

[Infographic: contrast panels showing the imposter-syndrome perception versus the actual contribution split in AI-assisted presentation work: typing versus thinking, drafting versus editing, surface versus judgement]

Practical techniques for performance anxiety in senior presentation work

Conquer Your Fear of Public Speaking is a self-paced programme for professionals who experience anxiety, imposter feelings, or in-the-moment nerves during high-stakes presentations. Designed for the executive audience — practical recovery techniques you can use mid-meeting, not generic advice. £39, instant access.

  • Self-paced lessons covering pre-meeting preparation
  • In-the-moment recovery techniques for live presentation moments
  • Frameworks for managing the imposter voice that surfaces under pressure
  • Designed for senior professionals in high-stakes scenarios

Get Conquer Your Fear of Public Speaking →

Designed for senior professionals managing performance anxiety in board, investor, and executive presentation contexts.

Reframe three: the question your imposter voice is really asking

The third reframe goes one layer deeper. The “am I cheating” question is rarely the actual question underneath. When senior professionals dig into what the imposter voice is genuinely worried about, the underlying question usually turns out to be one of three things, and each one has a different response.

The first underlying question is “if they ask me something off the slides, will I look foolish?” This is a competence question, not an authorship question. The answer is not to abandon AI — it is to do the depth work that prepares you to answer questions beyond the deck content. The deck is one slice of your knowledge. AI helped you produce the slice. Your years of context are what handle the questions. Use the time AI saves you to deepen your audience preparation, not to do less work overall.

The second underlying question is “if they find out I used AI, will they think less of my contribution?” This is a social-acceptance question. The honest answer is that some audiences will, particularly in environments that are still adjusting to AI norms. The right response is not concealment, which feeds the imposter voice. The right response is matter-of-fact disclosure when asked, framed around the editorial judgement that produced the final output: “Yes, I used Copilot to draft the structure; the analysis and the recommendation are mine. The AI saved me about three hours.”

The third underlying question is “if AI can do this, what am I actually contributing?” This is an identity question, and it deserves a serious answer rather than a deflection. AI cannot do the invisible work — the situational awareness, the political read, the executive context, the judgement that comes from having been in the room before. Those are your contribution. AI use highlights this contribution by stripping away the typing that used to obscure it. If you find your contribution unclear after AI strips the typing away, that is useful information about where to focus your professional development. The right response is to invest in the parts of your work AI cannot do, not to retreat from AI use to preserve the visible parts it can.

The productive caution worth keeping

None of these reframes are about silencing all hesitation around AI use. There is a productive caution underneath the imposter feeling that is worth preserving — the caution that prompts you to verify numbers the AI generated, to check the source of claims, to read the deck aloud against the audience’s likely reaction, to take responsibility for what reaches the room. That caution is the editorial judgement at work. Keep it. It is the difference between AI-assisted senior output and AI-flavoured generic output.

The reframes target the unproductive part of the feeling — the part that says you are not entitled to present material because you used a tool to draft it. That part is wrong, and feeding it makes you a worse presenter, not a more honest one. Concealing AI use because the imposter voice told you to leads to evasive answers when audiences ask direct questions, which damages credibility more than the AI use itself ever would.

The senior professionals who handle this transition cleanly tend to land on a stable framing: AI is a tool I use to do my work faster; the work itself — the judgement, the decisions, the editorial pass — is mine; if asked, I will say so plainly; if not asked, I will not perform a confession that is not required. The editorial pass is what makes the difference between AI output that lands and AI output that gets pushed back. That pass is yours. The cheating voice is misreading the labour. Do not reorganise your career around its mistake.

Want a structural framework that anchors your editorial judgement?

The Pyramid Principle Template is a free reference for structuring executive briefings — lead with the answer, then prove it. Useful as the structural target your editorial pass is editing toward. Free download.

Get the Pyramid Principle Template →

FAQ

Should I tell people I used AI to draft the deck?

If you are asked directly, yes. Honesty handles the question once and removes the imposter loop entirely. If you are not asked, you do not owe a proactive disclosure unless your organisation requires one. Performing a confession that was not requested often draws more attention to AI use than a matter-of-fact answer would. The framing that works in either case is “I used AI to draft the structure; the analysis and recommendation are mine” — which credits both the tool and the judgement honestly.

Why does the cheating feeling get worse the better the AI gets?

Because the gap between visible AI contribution and invisible human judgement gets larger as the model improves. Earlier AI tools produced obviously rough output that you visibly had to fix; the editorial work was visible because the gaps were visible. Better models produce smoother output that needs subtler editorial work; the gaps are no longer visible to you, even though they are still there. The judgement work has not disappeared — it has just stopped being noticeable. The reframe is to deliberately track the editorial decisions you are still making, even when they feel small.

Is imposter syndrome about AI different from regular imposter syndrome?

It has the same underlying mechanism — a perceived gap between contribution and credit — but a different trigger. Regular imposter syndrome is triggered by promotion, scope expansion, or visibility increases. AI-related imposter syndrome is triggered by the labour distribution change. The mechanism is the same; the trigger is new. The same techniques that help with regular imposter syndrome — naming specific contributions, reality-testing the worst-case interpretation, talking to peers — also help here. The first reframe in this article is the AI-specific addition.

What if my anxiety about using AI is severe enough to disrupt my presentation performance?

If the cheating feeling intensifies during the presentation itself rather than dissolving with the reframes, the underlying issue is performance anxiety more than imposter syndrome about AI specifically. The AI use is the trigger but not the cause. Practical techniques for in-the-moment anxiety — controlled breathing, the structured pause, the recovery sentence — work the same way regardless of whether AI was involved in producing the deck. The deck is yours to present once you are in the room. The earlier the anxiety pattern is addressed, the less it will surface in subsequent presentations.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Pyramid Principle Template — the framework that gives your editorial judgement a structural target to edit toward.

Next step: name three specific editorial decisions you made on the last AI-assisted deck you produced. Write them down. Re-read them when the cheating voice next surfaces. The decisions are real. The voice is misreading them.

Related reading: The Copilot Agent Mode workflow that makes editorial judgement the senior contribution.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.

08 May 2026
Middle-aged man in a navy suit sits at a conference table during a business meeting with others nearby.

“Did You Use AI for This?” — How to Answer When a Board Member Asks

Quick answer: When a board member asks if you used AI to build the deck, the answer is yes (if you did). The deflection that ruins careers is the hesitation, not the truth. Use the three-part response: confirm tool use plainly, name the part you owned, name the verification you applied. The whole reply takes under thirty seconds. Done well, the question dissolves and the room moves on. Done badly — with hedging, irritation, or evasion — the question becomes the meeting.

Kenji was eight minutes into a quarterly results presentation when the non-executive director on his right tilted her head and said, gently but clearly, “Just a quick one — did you use AI for any of this?” The room went quiet in the way rooms do when an unscripted question lands. Kenji’s first instinct was to say “no, of course not” — even though he had used Copilot to draft the structure and roughly half the headlines. The lie would have been easy. It also would have been a career-shaping mistake.

He took a beat. He said: “Yes — I used Copilot to draft the structure, and I rewrote the analysis and the recommendation myself. The numbers in slide six and slide nine I personally verified against the source data.” Total response time: seventeen seconds. The non-executive director nodded once, said “thanks”, and the room moved on. By the end of the meeting nobody mentioned the AI question again, and Kenji’s recommendation was approved.

What saved Kenji was not the truthfulness alone, although the truthfulness mattered. It was the structure of the answer. The three-part response — confirm, own, verify — handles the question cleanly because it gives the room everything it needs to assess your credibility in one short reply. Most presenters who fumble this question do so because they have not pre-built the structure. They are composing under pressure, and what comes out is hedging, defensiveness, or over-explanation. All three escalate the question instead of resolving it.

Looking for a structured way to handle tough questions in executive Q&A?

The Executive Q&A Handling System is designed for senior professionals who need to handle tough questions, calm authority, and decision-safe answers under board-level pressure.

Explore the Q&A System →

Why the question gets asked

“Did you use AI for this?” is rarely the literal question. It is a proxy for one of three underlying concerns the board member has not stated explicitly. Understanding which concern is in play tells you what your response actually needs to address.

The first underlying concern is verification. The board member has spotted a phrasing, a claim, or a piece of language that does not feel like it came from someone who knows the business. They are checking whether what they are looking at has been verified by a human who understands the context. The right response anchors the verification work — the parts you personally checked against source data, the editorial decisions you made on top of any AI draft.

The second underlying concern is governance. Some board members are tracking AI use as a corporate risk topic — data privacy, intellectual property, model bias, regulatory exposure. The question is partly about you and partly about the organisation’s broader AI posture. The right response acknowledges the tool use without minimising it and signals that the work was done within whatever AI guidelines are in place.

The third underlying concern is competence. The board member wants to know whether you, the presenter, can answer questions beyond what is on the slides — or whether the AI has produced material you could not defend if pressed. The right response demonstrates ownership of the analysis and recommendation: not “the AI thinks”, but “I think”. The competence concern is the most common driver of the question and the one that most rewards a confident, structured reply.

Dashboard infographic showing the three underlying concerns behind the AI use question — verification, governance, and competence — with the response element each concern requires

The three-part response structure

The structure has three parts, in this order. Reordering or skipping any of them weakens the response. Each part is a short sentence. The whole reply takes between fifteen and thirty seconds.

Part one: confirm tool use plainly. “Yes — I used Copilot to draft the structure.” Or: “Yes — I used ChatGPT to summarise the source documents.” Or: “No, this was written by hand.” The plain confirmation does two things. It removes any sense that you are hesitating to admit something. And it answers the literal question, which clears the way for the parts that actually address the underlying concern.

The most common error here is qualifying the confirmation with a defensive softener. “Yes, but only for the structure.” “Yes, but I also rewrote everything.” “Yes, although obviously the analysis is mine.” The “but” and “although” signal that you think the AI use is something to apologise for, which contradicts the calm authority the room is reading you for. Confirm cleanly. The qualifying work belongs in part two, not part one.

Part two: name the part you owned. “The analysis and recommendation are mine.” Or: “The conclusion in slide twelve is my judgement; the model surfaced the framing question.” Or: “The structural sequence reflects my view of how the committee thinks; I used the AI to draft the headlines and then rewrote the ones that did not land.”

This part is where the competence concern gets resolved. You are explicitly naming what you contributed, in a sentence that demonstrates you can articulate the boundary between AI output and human judgement. Board members trust presenters who can name their contribution precisely. They distrust presenters who claim everything as their own (which is implausible after admitting AI use) or who minimise their own contribution (which suggests they did not really do the work).

Part three: name the verification you applied. “The numbers in slide six and slide nine I personally verified against the source data.” Or: “I cross-checked the regulatory citation in slide eight with our compliance team.” Or: “The competitive comparison was reviewed by our strategy lead before this meeting.”

This part addresses both the verification concern and the governance concern in one move. It signals that you did not simply pass through the AI output — you treated it as a draft that required senior verification. Specific verification details are more credible than general assurances. “I checked the numbers” is weaker than “the numbers in slide six and slide nine I verified against the source data”. Specificity buys credibility.

Five failure modes that escalate the question

The same question lands very differently depending on how it is handled. Five specific failure modes consistently escalate “did you use AI” from a passing query into a meeting-derailing exchange.

The hedge. “Well, I used some AI to help with parts of it…” This signals discomfort and invites follow-up. The board reads the hedge as evasion, not honesty. The fix is the plain confirmation in part one of the structure.

The denial. “No, I wrote the whole thing myself.” If this is true, say it. If this is false, do not say it. The risk-reward maths is stark: the upside of a successful denial is small; the downside of a denial that gets exposed (a chief of staff who knows you used Copilot, an artefact in the file metadata, a bullet that obviously came from a model) is career-defining. Never lie about AI use. The question is not worth the risk.

The over-explanation. “Yes, I used Copilot, but you have to understand that the way I use it is more like a research assistant than a writer, and obviously the conclusions are mine because the model couldn’t possibly know our specific situation, and I always verify everything…” Over-explanation reads as guilt. The board reads the length of your reply as a measure of your discomfort. Keep the answer to thirty seconds maximum. Anything longer triggers the suspicion the short answer would have prevented.

Stacked cards infographic showing five failure modes when answering 'did you use AI' — the hedge, the denial, the over-explanation, the irritation, and the technical lecture — with the corrected response for each

The complete framework for executive Q&A under pressure

The Executive Q&A Handling System is the structured framework for senior professionals presenting to boards and executive committees. Tough questions, calm authority, decision-safe answers in 45 seconds. £39, instant access.

  • Structured response patterns for the most common executive question types
  • Recovery techniques for when a question lands harder than expected
  • Frameworks for hostile questions, multi-part questions, and trap questions
  • Designed for board, investment committee, and executive committee scenarios

Get the Executive Q&A Handling System →

Designed for senior professionals managing high-stakes Q&A in executive presentation contexts.

The irritation. “Does it really matter how I built the slides?” Or: “I’m not sure why that’s relevant.” Both responses cast the question as inappropriate, which puts the questioner on the defensive and turns the exchange into a status confrontation. Even when you privately think the question is petty, do not signal that thought. Treat the question as legitimate, answer it cleanly, move on.

The technical lecture. “Well, the way Copilot Agent Mode works is that it chains multiple sub-tasks, and I gave it instructions to…” Board members did not ask for a tutorial on AI capabilities. They asked whether you used the tool. Stay at the level the question was asked. If they want technical detail, they will follow up.

Likely follow-up questions and how to handle them

If the three-part response is delivered well, follow-up questions are uncommon. When they do come, they tend to fall into a small number of patterns. Knowing the patterns lets you respond without composing under pressure.

“How do you know the AI didn’t make something up?” Address the verification process specifically. “Every quantitative claim in the deck I verified against the source documents — the model has a tendency to restate numbers in ways that are close but not exact, so I treat every figure as a flag for verification. The claims in slides four, six, and twelve I cross-checked with [name of the source / colleague / function].”

“Are we comfortable with this from a data privacy perspective?” This is a governance question and it deserves a governance answer. “I used the enterprise version of Copilot, which keeps data within our tenancy and does not train external models on our inputs. This complies with our current AI use guidelines.” If you do not know the answer to this question definitively, do not improvise. Say: “I followed the AI guidelines our IT team published in [month]. If you want a more detailed assessment, [name of CIO / DPO / equivalent] can give you the full picture.”

“Could you have produced this without AI?” Almost always yes, and you should say so. “Yes — it would have taken me about three additional hours of structuring and drafting time, which is the time AI saved on this deck. The analysis itself was the same work either way.” This handles the implicit doubt about competence by making clear that AI affected your speed, not your capability.

“What else have you used AI for?” Be honest, be brief, and be specific. “For executive presentation work, I use Copilot for first-draft structure, source-document compression, and Q&A pre-mortems. For [other categories of work], I follow the same pattern of AI draft plus human verification.” Avoid sweeping statements like “I use it for everything” or “almost nothing” — both invite follow-up. Naming specific workflows is more credible than describing your AI use in general terms.

The prevention move: pre-empting the question entirely

The cleanest handling of the AI question is the version where the question never gets asked, because the deck does not telegraph AI use. The board member who asked Kenji’s question did so because something in the deck — a slightly generic phrasing, a too-symmetrical structure — pinged her ear. If the editorial pass on the AI draft had removed those signals, the question might not have surfaced.

The prevention move is the editorial pass itself. Rewrite generic headlines as findings. Anchor every claim to specific evidence the audience recognises as internal. Replace AI-flavoured phrasing with your organisation’s actual vocabulary. Cut the slides the AI added because they “completed” a section. The same editorial moves that produce a deck that gets approved also produce a deck that does not invite the AI-use question. The editorial pass is the prevention.

None of this means concealment. If you are asked, you answer truthfully using the three-part structure. But the editorial pass means the question gets asked less often, because the deck reads as senior thinking from inside the business — which is what board members are looking for in the first place. The AI underneath becomes irrelevant. The deck is yours either way.

FAQ

What if I used AI but I genuinely cannot remember what was AI-drafted versus what I wrote?

This happens, particularly when the editorial pass has been thorough. The honest answer is “I used Copilot for the first draft and then heavily edited the result; the final version reflects my analysis, but I would not be able to point to a specific bullet and tell you whether the original wording came from the model or from me.” That answer is credible because it acknowledges the merged nature of the work without trying to claim authorship of every word. Most board members will accept it without follow-up.

Should I disclose AI use proactively even if not asked?

Usually no, unless your organisation has an explicit disclosure requirement or unless the deck includes a specific element (a quoted figure, a regulatory citation) that you want to flag for additional verification. Proactive disclosure tends to draw attention to AI use rather than normalise it, and it can read as defensive. The exception is environments where disclosure is genuinely expected — academic settings, some regulated industries, and any organisation with a stated AI-use disclosure policy.

What if a board member follows up with “I do not approve of AI use for board material”?

This is a values disagreement, not a competence question. Acknowledge the position without abandoning the work: “I understand. The decision in slide twelve is mine and I would land on the same recommendation regardless of how the deck was drafted. I am open to discussing the organisation’s broader AI use policy in a separate forum.” That response respects the disagreement, retains your ownership of the substance, and moves the discussion of AI policy off the meeting agenda.

Can a deck reveal AI use in ways I might not have noticed?

Yes — file metadata can sometimes show which application generated which content, and certain phrasings are recognisable as AI-typical to readers familiar with the patterns. The editorial pass is the safest way to remove the most common signals, but assume that any deck you send to a board could be analysed for AI use if a board member chose to. The honest-when-asked approach removes the risk of being caught in a denial and keeps your credibility intact regardless of what the metadata or phrasing might reveal.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Next step: write down your three-part response now, before the question is ever asked. Confirm sentence. Ownership sentence. Verification sentence. Read it aloud. Adjust until it sounds like you. The pre-built response is what holds when the live moment arrives.

Related reading: Why AI-generated slides look generic — and the editorial pass that prevents the AI-use question.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.

19 Feb 2026
Executive reviewing presentation data and charts on laptop before high-stakes Q&A session with leadership team

Copilot Agent Mode in PowerPoint: The 25-Minute Executive Deck Workflow

Last Tuesday I rebuilt a client’s 34-slide board deck in 25 minutes. Not because I’m fast — because I stopped fighting Copilot with one-shot prompts and switched to Agent Mode’s conversational workflow.

Quick answer: Copilot Agent Mode in PowerPoint works like a sharp junior colleague — it asks clarifying questions, remembers context across prompts, and makes multi-step improvements without you repeating yourself. The old model (write one detailed prompt, hope for the best, rebuild what it gets wrong) is replaced by a back-and-forth conversation where each prompt builds on the last. The result: executive-quality decks in 25 minutes instead of 3 hours. Below is the exact five-phase workflow I now use with every client deck, plus the prompting shift that makes Agent Mode dramatically more effective than standard Copilot.

The Prompt That Changed Everything

For the first six months after Microsoft launched Copilot in PowerPoint, I wrote elaborate one-shot prompts. Fifty words. A hundred words. Specifying audience, tone, slide count, layout, data points. The output was always the same: a starting point that needed 90 minutes of surgery.

Then Agent Mode rolled out and I tried something different. Instead of giving Copilot everything upfront, I typed: “I need a 10-slide board presentation on our Q4 results. Can you help me build it slide by slide? Start by asking what metrics matter most to my board.”

Copilot asked me four questions. Who’s the audience? What decisions need to happen? What’s the one thing the board needs to walk away knowing? What data do you have ready?

After I answered, it built the deck — and because it understood the context, the slides actually made sense. Not generic. Not stuffed with filler. Structured around the decision I needed. I spent 12 minutes refining instead of 90 minutes rebuilding. That was the moment I stopped writing one-shot prompts for executive decks.

📋 Every Agent Mode Prompt You Need — Organised by Scenario

Updated 27 March 2026 — Revised for the latest Microsoft Copilot and ChatGPT capabilities.

The Executive Prompt Pack gives you copy-paste prompts for building executive decks from scratch (board updates, budget requests, investor pitches, strategy, transformation), rescuing existing decks (audit, condense, rewrite titles, “make it C-suite”), and generating specific slide types (data, comparison, roadmap, closing). Plus the complete 25-minute executive deck workflow and power modifiers that improve any prompt.

Digital download. Copy-paste prompts by scenario. Tested extensively on client decks across banking, biotech, SaaS, and consulting.

Stop Guessing What to Type. Start Building in 25 Minutes.

The Executive Prompt Pack gives you 71 tested prompts for ChatGPT and Copilot — structured by scenario so you know exactly what to type:

  • Build from scratch — scenario prompts for board reviews, budget requests, and investor decks
  • Rescue and rewrite — audit an existing deck, condense it, or fix one slide at a time
  • Industry-specific prompts for financial services, banking, consulting, and executive audiences
  • Power modifiers that transform any prompt into board-ready output
  • The 25-minute deck workflow that replaces 3–4 hours of manual building

Works with ChatGPT, Microsoft Copilot, and Edit with Copilot (formerly Agent Mode). Updated March 2026.

Get the Executive Prompt Pack → £19.99

Standard Copilot vs Agent Mode: The Real Difference

Standard Copilot in PowerPoint works like a vending machine. You insert a prompt, it returns slides. No memory. No follow-up. No context from one prompt to the next. If the output is wrong, you start over with a different prompt.

Agent Mode works like briefing a colleague. You describe what you need, it asks questions, and then it builds — remembering everything you’ve said across multiple prompts. When you say “make slide 3 more visual,” it knows what slide 3 contains, what the deck is about, and who the audience is.

What’s the difference between Copilot and Agent Mode in PowerPoint?
Standard Copilot requires you to guide each step with separate, context-free prompts — typically 5-10 per deck. Agent Mode works conversationally: it asks clarifying questions, maintains context across prompts, and allows surgical edits (“make slide 3 more visual”) without you rewriting the entire instruction. Agent Mode typically needs 1-3 prompts per deck versus 5-10 for standard mode. Agent Mode availability varies by organisation, tenant, and rollout schedule — if you don’t see it, check your M365 Copilot licence and admin settings.

This matters for executive decks because senior audiences have specific requirements that standard Copilot can’t hold in memory: the decision being requested, the politics in the room, the metrics that matter to this particular CFO. Agent Mode holds all of that context across every prompt in the conversation. For a deeper look at prompt structure fundamentals, see the complete Copilot PowerPoint prompts guide.

The 25-Minute Executive Deck Workflow (5 Phases)

This is the exact workflow I now use for every executive deck. Five phases, 25 minutes, from blank PowerPoint file to boardroom-ready output.

Phase 1: The Conversational Brief (5 minutes)

Open PowerPoint → Copilot Chat → Tools → Agent Mode. Then paste this type of opening prompt: “I need a [slide count]-slide [presentation type] for [audience]. The decision I need from this meeting is [specific outcome]. Start by asking me what you need to know.”

Agent Mode will ask 3-5 clarifying questions. Answer them honestly and specifically. This is where most of the quality comes from — not from the prompts themselves, but from the context you provide when Agent Mode asks.

Phase 2: The Build (5 minutes)

Once Agent Mode has your context, it generates the deck. Review the structure — not the content yet. The order of slides matters more than the words on them at this stage. If the flow is wrong, tell Agent Mode: “Move the financial impact section before the recommendation” or “add a risk slide between the timeline and the ask.”

Phase 3: The Audit (5 minutes)

This is where the playbook earns its money. Paste the deck audit prompt: ask Agent Mode to identify the 3 weakest slides and suggest specific improvements for clarity and impact. Then for each flagged slide, run the rewrite. Agent Mode remembers the original context, so its rewrites are targeted — not generic.

Phase 4: Polish (5 minutes)

Use the 2026 canvas sequence: Auto-Rewrite → Make professional → Condense. This three-step combo tightens language, cleans formatting, and removes the padding that Copilot adds to every slide by default. Then generate speaker notes and run a consistency audit — Agent Mode checks for conflicting numbers, mismatched terminology, and tone shifts across the full deck.

Phase 5: Stress Test (5 minutes)

Ask Agent Mode to generate the three toughest questions your audience will ask — and draft response slides or talking points for each. This is the step most people skip and most people regret. A board member who finds a gap in your logic during Q&A will remember that gap, not your slides. For the full Copilot PowerPoint tutorial and coverage of the latest features, see the complete guide.

Diagram showing the five-phase Agent Mode workflow: conversational brief five minutes, build five minutes, audit five minutes, polish five minutes, and stress test five minutes, totalling 25 minutes from blank file to boardroom-ready deck

For 71 tested prompts covering every scenario — build from scratch, rescue an existing deck, or fix individual slides — the Executive Prompt Pack gives you exactly what to type, updated for the latest Copilot and ChatGPT capabilities.

When You Already Have a Deck (The Rescue Workflow)

Half the time, you’re not building from scratch — you’re inheriting a 40-slide monster from last quarter that needs to be presentable by Thursday. Agent Mode handles this differently from standard Copilot because it can assess the full deck before making changes.

The rescue workflow has four steps. First, run the full deck audit: Agent Mode identifies the three weakest slides and gives you a fix direction for each. Second, condense — paste the “kill the text walls” prompt that targets slides with more than 5 bullet points or more than 30 words per slide. Third, rewrite slide titles: most corporate decks use label titles (“Q3 Revenue”) instead of insight titles (“Q3 Revenue Beat Target by 11% — Here’s What Drove It”). Agent Mode rewrites every title as an insight headline. Fourth, the “make it C-suite” pass: ask Agent Mode to rewrite the entire deck for a time-poor executive using the 8-second scan test — if a slide can’t be understood in 8 seconds, it gets simplified.

If you’ve ever wondered why your Copilot slides look generic, the rescue workflow fixes it — because Agent Mode uses the context of your specific deck, not generic templates.

🔧 Build From Scratch or Rescue What You’ve Got

Digital download (PDF). Copy-paste prompts organised by scenario. Designed for Agent Mode first, also works in standard Copilot.

The 3 Agent Mode Mistakes Everyone Makes First

Mistake 1: Treating it like standard Copilot. If you paste a 100-word one-shot prompt into Agent Mode, you’re wasting its best feature — the ability to ask you questions. Start with a brief context sentence and let Agent Mode pull the detail out of you through its clarifying questions. The prompts it generates from conversation are better than anything you’d write upfront.

Mistake 2: Skipping the audit phase. Agent Mode builds good first drafts. Not perfect first drafts. The audit prompt (“find the 3 weakest slides and suggest specific improvements”) is what turns a good deck into one that survives a boardroom. Most people generate and present. The professionals generate, audit, and then present.

Mistake 3: Ignoring power modifiers. Power modifiers are short phrases appended to any prompt that dramatically change the output: “lead with the headline,” “one key message per slide,” “format for scanning not reading.” These modifiers work because Agent Mode remembers them across subsequent prompts — unlike standard Copilot, which forgets everything after each interaction.

How do I use Agent Mode in PowerPoint?
Open PowerPoint with a Microsoft 365 Copilot licence. Click the Copilot Chat button in the ribbon, then select Agent Mode from the Tools menu in the prompt box. Start with a brief description of what you need (“I need a 10-slide board presentation on Q4 results”) and let Agent Mode ask clarifying questions before it builds. Agent Mode availability varies by organisation and rollout schedule — check your M365 Copilot licence and admin settings for current feature access.

Can Copilot build a presentation from scratch?
Yes — and Agent Mode does it significantly better than standard Copilot. With standard Copilot, you write one prompt and get a full draft that usually needs heavy editing. With Agent Mode, you have a conversation first: Copilot asks what the deck is for, who the audience is, what decisions need to happen, and what data you have. The resulting deck is more targeted because Agent Mode understood the context before it started building. Most professionals find that Agent Mode decks need 10-15 minutes of refinement versus 60-90 minutes for standard Copilot output.

⚡ Stop Guessing. Start Pasting.

The Executive Prompt Pack gives you the exact prompts — organised by scenario, not alphabetically. Board deck? Page 3. Budget request? Page 5. Rescuing a 40-slide disaster? Page 12. Every prompt is built around executive decision logic and tested on real client decks across multiple industries. Plus the 25-minute workflow, power modifiers, speaker notes prompts, and Q&A stress test.

Used by executives, consultants, and senior managers who present to time-poor decision makers. Digital download — start using it today.

71 Prompts. Every Scenario Covered.

Build from scratch, rescue an existing deck, or perfect individual slides — the Executive Prompt Pack covers every scenario. Works with ChatGPT, Copilot, and Edit with Copilot. Updated March 2026.

Get the Prompts → £19.99

Frequently Asked Questions

Do I need Copilot Agent Mode to use this playbook?

It’s designed for Agent Mode first — because Agent Mode asks clarifying questions and handles multi-step changes that standard Copilot can’t. But many of the prompts still improve results in standard Copilot, just with less “memory” and fewer multi-step edits. If your organisation hasn’t rolled out Agent Mode yet, you’ll still get better output from these structured prompts than from generic ones.

How is this different from the free prompts on your blog?

The blog posts teach prompt structure and individual techniques. The playbook is organised by scenario — you find your situation (board deck, budget request, deck rescue), paste the prompt, and go. It includes the complete 25-minute workflow, power modifiers, the deck audit and rescue sequence, slide-type prompts, and speaker notes and Q&A generation. It’s designed to sit next to your keyboard, not to teach you theory.

Will this work for my industry?

Yes — because the prompts are structured around executive decision logic (metrics, risks, outcomes, asks), not industry-specific jargon. I’ve tested these prompts on decks across banking, biotech, SaaS, consulting, and public sector. If your audience makes decisions from slides, these prompts are built for you.

Get monthly Copilot updates + presentation strategies

I test every new Copilot feature on real client decks before recommending it. Get what actually works, delivered monthly.

Subscribe to The Winning Edge

Related: Agent Mode can build your slides — but it can’t present them for you. If presentation anxiety is what’s really holding your career back, read Presentation Anxiety Is Ruining My Career — What Actually Fixes It.

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years across banking and consulting — including JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank — and more than 15 years of executive training, she has supported presentations for high-stakes funding rounds and approvals.

She tests every Copilot feature on real client decks before recommending it, and has trained professionals on AI-enhanced presentations across banking, biotech, SaaS, and consulting.

Book a discovery call | View services

Your next step: Open PowerPoint, go to Copilot Chat → Tools → Agent Mode, and paste this: “I need a [number]-slide [type] presentation for [audience]. The decision I need from this meeting is [outcome]. Start by asking me what you need to know.” See how different the output is when Copilot understands the context first. Then grab the full playbook to have every scenario prompt ready when the next deck is due.

16 Feb 2026
Your Data Slides Are Killing Your Presentation. Here's How AI Can Fix That.


The CFO had pasted an entire Excel tab — thirty-seven rows of quarterly figures — onto a single slide. Then he asked the board to “take a moment to absorb the numbers.”

Quick answer: AI data visualisation for presentations can transform unreadable spreadsheet dumps into clear, persuasive charts — but only when you tell it what the data means first. The process is not “paste my data and make it pretty.” It’s a three-step human-led workflow: decide the insight (what does this data prove?), choose the visual type (comparison, trend, composition, or distribution), then use AI to generate, label, and refine the chart. AI handles the visual execution. You handle the strategic thinking. The result is data slides that make a point rather than display a table.

At Commerzbank, I sat through a quarterly review where the Head of Risk presented a slide with a forty-two-cell table comparing capital adequacy ratios across eight business lines and four quarters. Every cell was filled. Every number was accurate. Nobody in the room knew what it meant.

After the meeting, the Group Treasurer said to me: “I have no idea whether we’re in trouble or not.” The data was perfect. The communication was useless.

I helped him rebuild that slide. We replaced the table with a single bar chart showing one thing: which business lines were above the threshold and which were below. Three were red. The rest were green. The next board meeting lasted half the time and produced twice the decisions. Same data. Different visual. Completely different outcome.

Why Data Tables Fail in Executive Presentations

Data tables work in reports. They fail in presentations. The reason is cognitive: a table asks the reader to perform analysis, while a chart provides the analysis already completed. When you paste a spreadsheet into a slide, you’re asking your audience to do the work you should have done before the meeting.

Senior executives are processing information from dozens of sources across dozens of meetings. They don’t have the cognitive bandwidth to scan forty-two cells, identify the relevant comparisons, and draw their own conclusions — all while you’re talking over the top of the slide. A data table in a presentation is not information. It’s a homework assignment.

The result is predictable. Executives either tune out (because the table is overwhelming), or they focus on the wrong number (because without visual hierarchy, every number looks equally important). Either way, your data fails to do its job, which is to support a specific point that drives a specific decision.

This is why data-heavy presentations often backfire with executives. The problem isn’t the data. It’s the format. And this is precisely where AI can help — not by thinking for you, but by transforming your thinking into a visual that communicates instantly.

PAA: Why do data-heavy slides fail in presentations?
Data tables require the audience to perform their own analysis — scanning cells, making comparisons, and drawing conclusions — while simultaneously listening to the presenter. Executive audiences don’t have the cognitive bandwidth for this. Charts solve the problem by pre-digesting the analysis: they show the conclusion visually so the audience can absorb the insight in seconds rather than minutes. The presenter’s job is to decide the insight first, then choose a visual format that makes that insight obvious.

Turn Data Into Decisions — Not Decoration

AI-Enhanced Presentation Mastery includes the complete data visualisation workflow: the Insight–Implication–Action framework, AI prompt sequences for chart creation, and the visual decision matrix that tells you which chart type to use for any dataset. Self-study programme — join anytime.

Join AI-Enhanced Presentation Mastery → £249

Self-study programme with live support. Join anytime — all released modules available immediately. Built from 24 years presenting financial data in corporate banking. Check course page for current pricing and session details.

The Insight-First Method (Before You Touch AI)

The biggest mistake people make with AI data visualisation is starting with the data. They paste a spreadsheet into an AI tool and ask it to “make a chart.” The result is a technically correct but strategically useless visualisation — because AI doesn’t know what point you’re trying to make.

Before you touch AI, answer one question: What does this data prove?

Not “what does this data show” — that’s a description. “What does this data prove” forces you to state a conclusion. Examples of the difference:

“This data shows Q3 revenue by region” → a description that leads to a table.
“This data proves that EMEA revenue recovered faster than expected” → an insight that leads to a chart with EMEA highlighted.

“This data shows customer satisfaction scores” → a description that leads to a grid.
“This data proves that satisfaction dropped in the two months after the platform migration” → an insight that leads to a trend line with the drop circled.

Once you have the insight, you can tell AI exactly what to visualise — and more importantly, what to emphasise. “Create a bar chart of Q3 revenue by region. Highlight EMEA in gold. Grey out all other regions. Add a horizontal line showing the forecast.” That prompt produces a useful chart because you’ve done the thinking. AI does the drawing.

This is the Insight–Implication–Action framework we teach in the course: every data slide should state the insight (what the data proves), the implication (what it means for the audience), and the action (what needs to happen next). AI can’t generate any of those three things. But once you’ve defined them, AI can create the visual that communicates them instantly.
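For readers who assemble their AI prompts programmatically, the Insight-First pattern above can be sketched in a few lines of Python. Everything here (the function name and its fields) is an illustrative stand-in, not part of any AI tool or the course material: the point is simply that the insight and the emphasis instructions are fixed before any model sees the data.

```python
# Illustrative sketch of the Insight-First prompt pattern: the human decides
# the insight, the highlight, and the de-emphasis; the AI only gets a fully
# specified drawing job. All names here are hypothetical.

def build_chart_prompt(insight, chart_type, highlight, de_emphasise,
                       reference_line=None):
    """Assemble an insight-first chart prompt for an AI tool."""
    parts = [
        f"Create a {chart_type}.",
        f"The key insight is: {insight}.",
        f"Highlight {highlight} in gold.",
        f"Grey out {de_emphasise}.",
    ]
    if reference_line:
        parts.append(f"Add a reference line at {reference_line}.")
    parts.append("The chart title must be a complete sentence stating the insight.")
    return " ".join(parts)

prompt = build_chart_prompt(
    insight="EMEA revenue recovered to 94% of target while APAC stayed at 71%",
    chart_type="horizontal bar chart of quarterly revenue by region",
    highlight="EMEA",
    de_emphasise="all other regions",
    reference_line="the 100% target",
)
print(prompt)
```

The design choice matters more than the code: the insight is a required argument, so a “paste my data and make it pretty” prompt is impossible to construct.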


Insight-First Method showing three steps: decide the insight, then choose the visual, then use AI to create and refine

📊 Want the complete Insight–Implication–Action framework and AI prompt sequences?

AI-Enhanced Presentation Mastery includes the data storytelling module with before/after transformations and the visual decision matrix.

Join AI-Enhanced Presentation Mastery → £249

How AI Transforms Data Into Visual Clarity

Once you’ve identified the insight, AI becomes genuinely powerful. Here’s the workflow for transforming a data-heavy slide into a clear visual:

Step 1: Give AI the data AND the insight. Don’t just paste your spreadsheet. Tell AI what you want the audience to take away. “Here is our quarterly revenue data. The key insight is that EMEA recovered to 94% of target while APAC stayed at 71%. Create a horizontal bar chart that makes this comparison obvious. Use gold for EMEA and grey for APAC. Include a vertical line at the 100% target.” The more specific your instruction, the more useful the output.

Step 2: Ask AI to simplify, not add. AI’s instinct is to include everything. Your instinct should be to remove everything that doesn’t support the insight. “Remove the gridlines. Remove the exact values from bars under 50%. Make the chart title a complete sentence: ‘EMEA Revenue Recovered to 94% — APAC Still Lagging.’” The best data slides look almost empty. That’s the point — the insight should be impossible to miss.

Step 3: Use AI to generate the headline. Your slide title should state the conclusion, not describe the content. AI is excellent at rewriting “Q3 Revenue by Region” into “EMEA Recovery Outpaced Forecast — APAC Needs Intervention.” Give AI your insight and ask it to write a headline that a time-poor executive would understand without looking at the chart. If the headline alone tells the story, you’ve succeeded.

This three-step process — insight, simplify, headline — takes five minutes per slide and produces results that are dramatically more persuasive than any table, regardless of how much data that table contains.

If you want to go deeper on how to match your AI prompts to executive presentation needs, the key is always the same: tell AI what the data means before asking it to visualise the data.

PAA: How do I use AI to create charts for presentations?
Start by defining the insight your data proves — not just what it shows. Then give AI both the data and the insight in a single prompt, specifying the chart type, what to highlight, and what to remove. Ask AI to write the slide headline as a complete sentence that states the conclusion. The process takes about five minutes per slide and produces charts that communicate instantly rather than requiring the audience to decode a table.

From Spreadsheet Dump to Executive Clarity

Module 6 of AI-Enhanced Presentation Mastery covers data storytelling in depth — including the Insight–Implication–Action framework, the visual decision matrix, AI prompt sequences for chart transformation, and before/after examples from real executive presentations. Study at your own pace.

Join AI-Enhanced Presentation Mastery → £249

Self-study programme with live Q&A calls. Join anytime — all released modules available immediately. Check course page for current pricing and session details.


Before and after comparison showing spreadsheet table transformed into a clear highlighted bar chart with insight headline

The Four Chart Types That Cover 90% of Executive Data

You don’t need twenty chart types. You need four. Almost every data insight an executive needs to communicate falls into one of these categories:

1. Comparison: “How do these things stack up?” Use horizontal bar charts. Revenue by region, performance by team, budget vs actual. AI prompt: “Create a horizontal bar chart comparing [X]. Highlight the top performer in gold and the underperformer in red. Grey out the middle. Title should state who’s winning.”

2. Trend: “What’s changing over time?” Use line charts. Revenue trajectory, customer satisfaction over quarters, headcount growth. AI prompt: “Create a line chart showing [X] over [time period]. Highlight the inflection point where the trend changed. Add a brief annotation explaining what caused the change. Title should state whether the trend is positive or negative.”

3. Composition: “What’s the breakdown?” Use stacked bars or pie charts (but only for 3–5 segments — more than five and the pie becomes useless). Revenue mix, cost allocation, market share. AI prompt: “Create a stacked bar chart showing [X] breakdown. Highlight the largest segment. Title should state what dominates.”

4. Distribution: “Where does the data cluster?” Use scatter plots or histograms. Customer segments by value, project risk ratings, team performance distribution. AI prompt: “Create a scatter plot showing [X] vs [Y]. Circle the outliers. Title should state the pattern — whether it’s clustered, spread, or has notable outliers.”

When you’re unsure which chart type to use, ask yourself: “Am I comparing, tracking a trend, breaking down a whole, or showing a distribution?” The answer picks the chart. Then tell AI which category and let it handle the execution. This is considerably more effective than the approach covered in data storytelling fundamentals, because AI handles the visual execution while you focus entirely on the strategic framing.

📊 The visual decision matrix and AI prompt templates for all four chart types are inside the course.

AI-Enhanced Presentation Mastery includes the complete data visualisation system — frameworks, prompts, and before/after examples.

Join AI-Enhanced Presentation Mastery → £249

What AI Cannot Do With Your Data (The Human Part)

AI is excellent at the mechanical parts of data visualisation — creating charts, formatting them, writing headlines, standardising colours. But there are four things AI cannot do, and they’re the four things that actually matter:

AI cannot decide what’s important. Your dataset might contain fifty data points. Only three of them matter to your audience. Which three? That depends on who’s in the room, what they care about, and what decision you’re asking them to make. This is strategic judgment, not data analysis. AI can’t do it.

AI cannot read the political room. Sometimes the data shows something uncomfortable — a team underperforming, a region in decline, a project over budget. How you visualise that data depends on whether you’re presenting to the team responsible (where diplomacy matters) or to the board (where directness matters). AI doesn’t know the politics. You do.

AI cannot tell you what’s missing. The most dangerous data slide is the one that’s technically accurate but strategically incomplete. If your chart shows revenue growth but doesn’t show margin erosion, it’s misleading. AI won’t flag what you’ve left out. Only someone who understands the full business context can do that.

AI cannot determine the “so what.” Every data slide needs to answer one question: “So what?” Revenue grew 12% — so what? Is that good? Compared to what? What should we do about it? The “so what” is the entire point of the slide, and it requires human judgment about context, expectations, and next steps.

The best data slides are 80% human thinking and 20% AI execution. AI makes the visual. You make the point.


Four things AI cannot do with your data: decide importance, read the room, spot what is missing, determine the so what

PAA: Can AI replace human thinking in data presentations?
No. AI is excellent at the visual execution — creating charts, formatting them, writing headlines — but it cannot determine what’s important, read political dynamics in the room, identify what data is missing, or decide the “so what” that makes a slide actionable. The most effective workflow uses AI for 20% of the work (visual execution) and human judgment for 80% (strategic framing, audience awareness, and insight selection). AI is the pen. You’re the author.

Learn the Complete System for Executive Data Slides That Drive Decisions

AI-Enhanced Presentation Mastery teaches you the human-led, AI-assisted approach to executive presentations — including the Insight–Implication–Action framework, the visual decision matrix, AI prompt sequences, and the data storytelling techniques built from 24 years presenting financial data in corporate banking and 15 years coaching executives through high-stakes decision meetings.

Join AI-Enhanced Presentation Mastery → £249

Self-study programme with live support. Join anytime — all released modules available immediately. Built from 24 years presenting financial data in corporate banking + 15 years coaching executives. Check course page for current pricing and session details.

Frequently Asked Questions

What if my audience expects to see the full data table?

Put the table in the appendix. Present the chart in the main deck. If someone asks “where are the detailed numbers?” you say “slide 22 in the appendix” and continue with your insight. This gives you the best of both worlds: visual clarity in the presentation and full data availability on request. In twenty-four years of corporate banking, I’ve found that the executives who request the detailed table almost never actually read it — they just want to know it’s there.

Which AI tools are best for data visualisation?

Any AI tool that can process text prompts and generate charts works — ChatGPT, Claude, Copilot in PowerPoint. The tool matters less than the prompt. A specific prompt (“Create a horizontal bar chart comparing Q3 revenue by region, highlight EMEA in gold”) produces dramatically better results than a vague prompt (“Make a chart from this data”) regardless of which tool you use. The Insight-First Method works with any AI platform.

How do I handle sensitive financial data with AI tools?

If your data is confidential, use anonymised or rounded figures for the AI-generated chart, then manually replace them with the real numbers in your final slide. AI needs the structure and proportions to create the right visual — it doesn’t need the exact numbers. Alternatively, use AI only for the chart template and formatting, then input your data directly. Many organisations have approved AI tools with enterprise-grade data protection for this purpose.

Does this work for non-financial data?

The Insight-First Method works for any data type: project timelines, customer satisfaction scores, employee engagement metrics, operational KPIs, marketing funnels. The principle is the same — decide the insight before you create the visual, tell AI what to emphasise, and write a headline that states the conclusion. The four chart types (comparison, trend, composition, distribution) cover 90% of any data you’ll present in a corporate setting.

📬 The Winning Edge Newsletter

Weekly strategies for executive presentations, AI-enhanced workflows, and career-critical stakeholder communication. No fluff.

Subscribe free →

🎯 Free: 10 Essential AI Prompts for Executive Presentations

Includes the data visualisation prompts for all four chart types, plus headline rewriters and the slide clarity check sequence.

Download free →

Related: Data slides are one piece of the puzzle. If you’ve been thrown into a last-minute presentation and need to build a complete deck fast, the 5-slide emergency framework helps you decide which data to include and which to cut — before you start visualising anything.

Stop pasting spreadsheets into slides. Decide the insight first. Choose the right visual. Let AI handle the execution. Your audience will thank you — and your data will finally do its job.

🎯 Learn the human-led, AI-assisted approach to executive presentations.

Join AI-Enhanced Presentation Mastery → £249

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years across banking and consulting at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she spent two decades watching executives paste spreadsheets into slides — and helping them transform that data into visuals that actually drove decisions.

A qualified clinical hypnotherapist and NLP practitioner, Mary Beth combines executive communication expertise with modern AI-enhanced workflows to help leaders present data with clarity and conviction.

Book a discovery call | View services

15 Feb 2026
Professional at desk comparing presentation layouts on screen, actively building slides in a bright modern office with natural lighting

Teach AI Your Presentation Style (So It Stops Sounding Generic)

Quick answer: AI makes presentations faster but also makes them generic — because most people prompt AI with what they want to say, not how they say it. To teach AI your presentation style, you need three things: a style brief (your tone, sentence patterns, and vocabulary), a structure framework (your preferred message architecture), and a critique loop (prompts that make AI edit its own output against your standards). This turns AI from a content generator into a strategic co-creator that sounds like you, not like everyone else.

Presenting this week? Do this in 15 minutes:

1. Write a 200-word style brief (your tone + vocabulary + 2 sample paragraphs)
2. Pick one structure rule: AVP for persuasion slides, 132 Rule for overall flow
3. Paste both into your AI tool before your content brief
4. Generate your first draft
5. Run the critique loop: “Does this follow my structure? Remove any phrase I wouldn’t use.”

That’s the free version. The AI-Enhanced Presentation Mastery course (£249) gives you the complete system — style brief template, all four frameworks, 30+ critique prompts, and a reusable AI playbook.

A marketing director showed me two versions of the same quarterly business review.

Version A was the one she’d written herself: sharp, direct, slightly dry. Her signature move was opening every section with a one-line insight before the data. The CFO loved it because he could scan the headlines and get the story in sixty seconds.

Version B was the one she’d asked ChatGPT to create from the same data. It was technically correct. Every point was there. But it read like it had been written by a committee — smooth, cautious, with phrases like “leveraging synergies” and “driving alignment across stakeholders” that she would never use in real life.

She said: “It’s faster but it’s not mine. The CFO would read this and think someone else wrote it. Which defeats the entire purpose.”

That conversation crystallised something I’d been seeing across every executive I work with: AI doesn’t have a quality problem. It has a voice problem. And the voice problem exists because nobody teaches you how to train AI on your style — your frameworks, your vocabulary, your preferred message structure. They just teach you how to write prompts. Which is like teaching someone how to use a steering wheel without explaining where they’re driving.

Why Every AI Presentation Sounds the Same

AI language models are trained on billions of words of internet text. When you ask one to “write a slide about Q3 performance,” it draws on the average of everything it’s ever seen about quarterly performance slides. The result is competent, generic, and indistinguishable from what everyone else is getting.

This is the fundamental problem with how most people use AI for presentations. They prompt for content — “write me five bullet points about customer retention” — and get content that could have come from anyone in any company in any industry. The content is accurate. It’s also forgettable.

The professionals who actually benefit from AI do something different. They don’t ask AI to generate content. They ask AI to execute their thinking — using their frameworks, their vocabulary, their preferred structure. The AI does the heavy lifting, but the output carries their signature.

The difference shows up immediately. When you give AI a bare prompt, you get generic corporate language. When you give AI your style brief and your message framework, you get output that sounds like a faster, more productive version of you.

This is exactly the approach taught in AI-enhanced presentation creation — structure first, AI second. The structure is what makes the output yours. The AI is what makes it fast.

PAA: Why do AI-generated presentations sound so generic?
Because most people prompt AI with what to say, not how to say it. AI defaults to the statistical average of everything it’s been trained on — which means smooth, corporate, committee-style language. To get output that sounds like you, you need to provide your style brief (tone, vocabulary, sentence patterns), your preferred message architecture (frameworks like AVP or the 132 Rule), and a critique prompt that makes AI edit its own output against your standards.

Stop Getting Generic Output. Start Getting Output That Sounds Like You.

AI-Enhanced Presentation Mastery teaches you how to build AI workflows that preserve your voice, your frameworks, and your communication style. 8 modules covering structure, messaging, slide design, data storytelling, and a complete personal AI playbook you’ll reuse for every presentation.

Get AI-Enhanced Presentation Mastery → £249

Currently £249 — launch pricing ends March 1st (£399 self-study / £750 live cohort).

Self-paced modules + live support + lifetime access.

The Style Brief: Teaching AI Your Voice in 5 Minutes

A style brief is a short document — 200 words maximum — that tells AI how you communicate. Not what you want to say. How you say things.

Here’s what goes into an effective presentation style brief:

Tone descriptors. Three to five words that describe your communication style. Examples: “direct, evidence-led, slightly dry humour” or “warm, structured, practical” or “analytical, precise, minimal adjectives.” AI language models respond dramatically to tone descriptors — they shift the entire register of the output.

Vocabulary preferences. Words you use and words you don’t. If you say “stakeholders” but never say “key stakeholders,” that matters. If you write “clients” instead of “customers,” specify it. If you avoid phrases like “leverage,” “synergise,” or “circle back” — tell the AI. This alone eliminates 80% of the generic feel.

Sentence pattern. Short sentences? Long analytical sentences? A mix? Do you open sections with a question or a statement? Do you use first person or third person? AI copies these patterns surprisingly well when you provide examples.

Two sample paragraphs. The most effective style brief includes two paragraphs you’ve actually written — presentation notes, an email to your boss, a section from a report. Not “ideal” writing. Your actual writing. AI learns more from your real voice than from your aspirational voice.

Once you have your style brief, you include it at the start of every AI conversation about presentations. The difference is immediate and, frankly, startling. Output goes from “could be anyone’s” to “sounds like mine” in a single prompt.


Personal AI presentation playbook showing four components: style brief, structure frameworks, critique prompts, and never-use list

📊 Want the complete style brief template? AI-Enhanced Presentation Mastery (£249) includes the full style brief builder, sample briefs for different professional styles, and the prompt architecture that makes AI output match your voice consistently.

Structure-First AI: Why Frameworks Beat Freeform Prompts

The second reason AI presentations sound generic: people ask AI to create structure and content simultaneously. This is like asking a builder to design the house and construct it at the same time — you get something functional but unremarkable.

The professionals who get the best results from AI use a structure-first approach: they define the message architecture before AI writes a single word.

In the AI-Enhanced Presentation Mastery course, we teach three frameworks that work exceptionally well as AI instructions:

AVP (Action-Value-Proof). Every slide follows a three-part structure: what you want the audience to do (Action), why it matters to them (Value), and the evidence that supports it (Proof). When you give AI the AVP framework as an instruction — “Structure every slide using Action-Value-Proof” — the output immediately becomes more persuasive and more structured than freeform prompting.

The S.E.E. Formula (Statement-Evidence-Example). For slides that need to present data or make a case: lead with the insight statement, follow with the evidence that supports it, then provide a concrete example that makes it real. This stops AI from producing the generic “here are five data points” output and forces it to tell a story with every slide.

The 132 Rule. For overall presentation flow: one opening message (the 1), three supporting sections (the 3), two closing elements — a summary and a call to action (the 2). This gives AI a macro-structure that prevents the wandering, unfocused presentations that AI tends to produce when given open-ended briefs.

When you combine your style brief with a structure framework, AI stops guessing and starts executing. The output isn’t generic because it was never given the chance to be — you’ve constrained it with your thinking, your architecture, and your standards.

If you’re currently using ChatGPT prompts for presentations, adding structure frameworks to those prompts will transform the quality of what you get back.

PAA: How do you get better results from AI for presentations?
Provide structure before content. Instead of asking AI to “create a presentation about X,” give it your message framework (like AVP or the 132 Rule), your style brief (tone, vocabulary, sentence patterns), and the specific decision you want the audience to make. AI excels at executing within constraints — the tighter the framework, the better the output. Freeform prompts produce freeform results.

Frameworks First. AI Second. Your Voice Always.

AI-Enhanced Presentation Mastery gives you the AVP formula, S.E.E. wording framework, 132 Rule, and Insight-Implication-Action structure — then shows you exactly how to feed them to AI so every presentation sounds like you wrote it. Includes prompt packs, before/after transformations, and the complete AI workflow.

Get AI-Enhanced Presentation Mastery → £249

£249 launch pricing — ends March 1st (£399 self-study / £750 live cohort). Self-paced modules + templates + prompt packs + live support + lifetime access.

The Critique Loop: Making AI Edit Against Your Standards

Here’s what most people miss entirely: the first output AI gives you should never be the final version. AI is a first-draft machine. The magic is in the critique loop — using AI to edit its own output against your specific standards.

A critique prompt works like this. After AI generates your presentation content, you say: “Now review this output against these criteria: (1) Does every slide follow AVP structure? (2) Are there any phrases I wouldn’t use? Remove ‘leverage,’ ‘synergise,’ and ‘key stakeholders.’ (3) Is any slide trying to make more than one point? Split it. (4) Does the opening grab attention in the first sentence?”

This is essentially turning AI into your personal presentation coach — one that knows your standards because you’ve defined them explicitly.

The most effective critique prompts we’ve developed in the course follow three levels:

Level 1: Structure critique. “Does this presentation follow the 132 Rule? Is the opening message clear? Do the three middle sections support different aspects of the argument? Does the closing include both a summary and a specific call to action?”

Level 2: Messaging critique. “Review each slide against the S.E.E. formula. Does every claim have evidence? Does every evidence point have a concrete example? Flag any slide where the message is vague or abstract.”

Level 3: Voice critique. “Compare this output against my style brief. Flag any sentences that use passive voice (I use active). Remove any corporate jargon that isn’t in my vocabulary list. Shorten any sentence longer than 25 words.”

Running all three levels takes about 5 minutes. The result is output that’s been through a more rigorous editorial process than most people apply manually — and it sounds like you, not like an AI.
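Part of the Level 3 voice critique can even be checked locally, before any AI tool is involved. Here is a minimal Python sketch, assuming a hand-maintained banned-word list and the 25-word limit from the critique prompt above; the function itself is hypothetical, not part of any product mentioned in this article.

```python
import re

# Illustrative local version of the Level 3 voice critique: flag sentences
# that contain banned jargon or run past the word limit. The banned list
# and the 25-word limit mirror the critique prompt described in the text.

BANNED = ["leverage", "synergise", "key stakeholders", "circle back"]

def voice_critique(text, banned=BANNED, max_words=25):
    """Return a list of (sentence, issues) pairs that need a rewrite."""
    flagged = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        issues = [w for w in banned if w.lower() in sentence.lower()]
        if len(sentence.split()) > max_words:
            issues.append(f"over {max_words} words")
        if issues:
            flagged.append((sentence, issues))
    return flagged

draft = ("We will leverage our platform to drive growth. "
         "Q3 revenue recovered to 94% of target.")
for sentence, issues in voice_critique(draft):
    print(f"- {sentence!r}: {', '.join(issues)}")
```

A script like this catches the mechanical violations instantly, which leaves the AI critique loop free to focus on the judgement calls: structure, messaging, and tone.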

If you’re already using PowerPoint Copilot, layering a critique loop on top of its output is the single fastest way to improve quality.

📊 Want all three critique prompt levels? AI-Enhanced Presentation Mastery (£249) includes the complete critique prompt pack — structure, messaging, and voice — plus the Master Prompt Pack with 30+ prompts for every stage of presentation creation.

Building Your Personal AI Presentation Playbook

The goal isn’t to use AI better for one presentation. It’s to build a reusable system that makes every future presentation faster and more consistently excellent.

Your personal AI playbook is a single document — we provide the template in the course — that contains everything AI needs to produce your-quality output every time:

Your style brief (200 words — tone, vocabulary, patterns, samples).

Your preferred frameworks (AVP for persuasion slides, S.E.E. for evidence slides, Insight-Implication-Action for data slides, 132 Rule for overall structure).

Your critique prompts (three levels — structure, messaging, voice).

Your “never use” list (phrases, words, and structural patterns that aren’t your style).

Your before/after examples (two or three examples showing generic AI output transformed into your-style output — so AI can learn from the patterns).

When you start a new presentation, you paste the playbook into your AI conversation first, then give your content brief. The AI has everything it needs to produce first-draft output that’s already 80% there. The critique loop handles the final 20%.

This is the difference between using AI as a random content generator and using AI as a strategic co-creator. One saves you time. The other saves you time and makes your work better.

Can AI really match your personal presentation style?
Yes — if you train it properly. AI is exceptionally good at mimicking communication patterns when given explicit examples. The key is providing a style brief (tone, vocabulary, sentence patterns, sample paragraphs), structure frameworks (so AI doesn’t default to generic architecture), and critique prompts (so AI self-corrects against your standards). The AI Playbook approach means you set this up once and reuse it for every presentation, with improving results over time.

Building presentations this month?

The AI-Enhanced Presentation Mastery course includes the complete playbook template, the style brief builder, all four structure frameworks (AVP, S.E.E., 132 Rule, Insight-Implication-Action), and the full critique prompt pack. Start building your personal AI system this week — and notice the difference in your next deck.

⏰ Launch pricing ends March 1st. Currently £249 — planned increase after launch period (£399 self-study / £750 live cohort). Lock in launch pricing before it changes →

AI Should Sound Like You — Not Like Everyone Else

AI-Enhanced Presentation Mastery is the course that treats AI as your execution engine — not your replacement. 8 modules covering structure frameworks, messaging formulas, data storytelling, slide design, critique prompts, and the personal AI playbook that makes every future presentation faster and unmistakably yours.

Get AI-Enhanced Presentation Mastery → £249

⏰ £249 launch pricing — ends March 1st (£399 self-study / £750 live cohort). Self-paced modules + live support + templates + prompt packs + lifetime access.

Frequently Asked Questions

Do I need to be technical or know how to code to teach AI my style?

No. Everything in this approach uses plain language — the same language you’d use to brief a colleague. You write your style brief in natural English. You describe your frameworks in normal sentences. The AI does the technical translation. If you can write an email explaining how you like things done, you can build an AI playbook.

Does this work with ChatGPT, Copilot, and other AI tools?

Yes. The style brief and structure framework approach works with any AI language model — ChatGPT, Claude, Copilot, Gemini, or whatever comes next. The principles are about how you communicate with AI, not which AI you use. The course provides prompts formatted for the most popular tools, and the playbook is tool-agnostic.

How long does it take to build a personal AI playbook?

The initial playbook takes about 45 minutes to build using the course template. The style brief takes 15 minutes, the framework selection takes 10 minutes, the critique prompts take 10 minutes, and assembling your before/after examples takes 10 minutes. After that, you reuse the playbook for every presentation — updating it only when your style evolves or you discover new patterns.

What if I’m not sure what my communication style actually is?

This is more common than you’d think — and it’s one of the most valuable outcomes of the playbook-building process. Module 2 of the course includes a “style discovery” exercise where you analyse three pieces of your own writing to identify your natural patterns. Most people are surprised by how consistent their style is once they look for it. The exercise takes 20 minutes and gives you the foundation for everything else.

📬 The Winning Edge Newsletter

Weekly strategies for executive presentations, AI workflows, and career-critical communication. No fluff.

Subscribe free →

Related: Once your AI workflow is producing personalised output, you need the right slide structure to put it in. If you’re presenting pilot results, the 8-slide pilot-to-rollout structure gives you the decision deck framework. And if presenting triggers nerves despite strong preparation, the imposter syndrome pre-presentation reset addresses the nervous system patterns that override your confidence.

AI doesn’t have a quality problem. It has a voice problem. And the voice problem is yours to solve — by teaching AI your frameworks, your vocabulary, your standards, and your style.

Build the playbook. Use the critique loop. Start with your next presentation.

🎯 Want the complete system — frameworks, prompts, templates, and live support?

Get AI-Enhanced Presentation Mastery → £249 (launch pricing)

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she combines presentation psychology with AI workflow design to help professionals create faster, clearer, and more persuasive presentations.

A qualified clinical hypnotherapist and NLP practitioner, Mary Beth has spent 15 years training executives in communication strategy. AI-Enhanced Presentation Mastery is her flagship course for professionals who want to use AI as a strategic co-creator — not a replacement for their thinking.

Book a discovery call | View services

13 Feb 2026
Executive reviewing printed presentation slides with pen while comparing to AI-generated deck on screen

Your AI Presentation Has Structure. It Doesn’t Have Persuasion. Here’s the Missing Layer.

Quick answer: AI tools are excellent at organising information into clear, logical structures. What they consistently fail to produce is persuasion — the layer that makes executives act, not just nod. The S.E.E. formula (Story-Evidence-Emotion) is the human review layer that transforms AI-structured content into presentations that drive decisions. Below: exactly how it works, why AI can’t do it for you, and how to apply it to any AI-generated deck in under 20 minutes.

⚡ Presenting this week? Do this on your next deck in 7 minutes:

  • Story: Add one specific client or internal example to each major section (2 min)
  • Evidence: Add a benchmark or consequence to every data point (3 min)
  • Emotion: On your recommendation slide, answer: “What do I need them to feel?” (2 min)

Want the full system with templates for each step? Get the S.E.E. Templates + Workflow →

The Board Said “So What?” After a Deck That Took 6 Hours to Build.

A client — head of strategy at a mid-sized financial services firm — came to me after what she described as “the most embarrassing board meeting of my career.” She’d used AI to build a 22-slide strategic review. The structure was immaculate. Clear sections. Logical flow. Data on every slide. The AI had done exactly what she’d asked: organise the quarterly results into a coherent deck.

She presented for eighteen minutes. The board listened politely. Then the chairman asked the question that made her stomach drop: “What do you want us to do?”

She had the data. She had the structure. She had the logic. What she didn’t have was a reason for anyone in that room to care — or act. The deck was informative. It wasn’t persuasive. And in a boardroom, informative without persuasive is just a well-organised waste of everyone’s time.

When we audited the deck together, the problem was obvious. Every slide followed the same pattern: here’s what happened, here are the numbers, here’s the next slide. No context for why the numbers mattered. No connection to what the board actually cared about. No emotional stakes. The AI had produced a report disguised as a presentation.

This is the gap that nearly every AI-generated presentation falls into. Not a structure problem. A persuasion problem. And it’s a gap that AI can’t close on its own — because making AI slides persuasive requires something AI doesn’t have: knowledge of what your specific audience fears, wants, and needs to hear before they’ll say yes.

🎯 Learn the Complete S.E.E. Framework Inside the Course

AI-Enhanced Presentation Mastery teaches you the full S.E.E. formula (Story-Evidence-Emotion) alongside AVP structure, the 132 Rule, and the Insight-Implication-Action framework for data — the complete system for turning AI output into presentations that drive executive decisions. Self-study modules releasing through April 2026, plus live Q&A sessions. Join anytime — you get all released modules immediately.

Get the S.E.E. Templates + Full Workflow →

Presale pricing: £249 — moves to £299 early bird, then £499 full price. 60-seat cap.

The Structure-Persuasion Gap: Why AI Output Feels Flat

AI is remarkably good at one thing: organising information logically. Give it data, a topic, and a prompt, and it will produce sections, headings, bullet points, and a sequence that makes rational sense. This is genuinely useful — it handles the tedious structural work that used to take hours.

But structure and persuasion are different skills. Structure answers “What information goes where?” Persuasion answers “Why should anyone care?” A well-structured deck can be completely unpersuasive. An unstructured but emotionally compelling argument can move a room. The ideal presentation has both — and AI consistently delivers only the first.

Here’s why. Persuasion requires three things AI doesn’t have access to: the specific context your audience is operating in, the emotional stakes attached to the decision, and the proof points that this particular group of people will find credible. AI can’t know that the CFO is worried about Q3 cash flow, that the board rejected a similar proposal six months ago, or that the CEO responds to client stories but switches off during spreadsheet reviews. These are human-intelligence inputs, and they’re exactly what transforms a structured deck into a persuasive one.

The reason most AI presentations fail isn’t that the AI is bad. It’s that the human skips the layer that makes AI slides persuasive, assuming structure is enough.

The S.E.E. Formula: Story, Evidence, Emotion

The S.E.E. formula is the persuasion layer you apply after AI has handled the structure. It stands for Story, Evidence, Emotion — three elements that, when woven into an AI-structured deck, transform it from a report into an argument that moves people to act.

Think of it this way: AI builds the skeleton. S.E.E. adds the muscle, the nervous system, and the heartbeat.

Each element serves a different persuasion function. Story provides context and makes your point memorable. Evidence provides credibility and makes your case defensible. Emotion creates urgency and makes your audience care enough to decide. A presentation that has all three is extremely difficult to dismiss. A presentation missing any one of them has a predictable failure mode.


Side by side comparison of AI output before and after applying the S.E.E. formula showing transformation from facts to persuasion

Layer 1: Story — The Context AI Doesn’t Know

Story in a business presentation doesn’t mean “once upon a time.” It means context — the specific situation that makes your recommendation relevant, urgent, and grounded in reality.

AI output typically starts with the general: “Market conditions have shifted.” “Customer satisfaction has declined.” “Revenue targets are at risk.” These statements are accurate but they don’t anchor to anything your audience can feel. They’re abstract. And abstract doesn’t persuade.

The S.E.E. Story layer asks you to add one specific, concrete example to each major section of your deck. Not fiction — a real situation from your organisation that illustrates the point.

For example, instead of AI’s “Customer churn has increased 12% year-over-year,” the Story layer adds: “When I spoke with three of our enterprise clients last month, two mentioned they’re evaluating competitors for the first time in four years. One said — and I’m quoting directly — ‘Your platform used to be ahead. Now it’s keeping pace.’ That’s the shift the 12% represents.”

Now the board isn’t processing a number. They’re processing a threat. The data hasn’t changed. But the context makes it matter.

This is something AI fundamentally cannot generate — because it doesn’t know which clients you spoke to, what they said, or which anecdote will land with this particular audience. It’s human intelligence applied to AI structure.

📋 The S.E.E. formula is one of six frameworks inside the course.

AI-Enhanced Presentation Mastery includes the complete system: AVP structure, 132 Rule, S.E.E. formula, data storytelling frameworks, plus AI prompt templates for each. Study at your own pace — modules releasing through April 2026.

Get All 6 Frameworks + AI Prompt Packs →

Layer 2: Evidence — Turning Data Into Proof

AI is very good at including data. It’s surprisingly bad at turning data into proof. There’s a crucial difference.

Data is a number. Proof is a number plus its implication. AI will give you “NPS declined from 72 to 61.” That’s data. Proof sounds like: “NPS declined from 72 to 61 — a drop below the threshold where enterprise clients typically begin vendor reviews, based on our last three contract cycles.”

The Evidence layer in S.E.E. asks you to do three things with every data point AI generates:

First, contextualise it. What does this number mean relative to a benchmark your audience recognises? Industry average, last quarter, a target they set, a competitor’s performance. Data without context is just a number. Data with context is a signal.

Second, source it credibly. AI often presents data without attribution. Executives discount unsourced numbers. Add where the data came from — even “based on our Q3 finance review” adds credibility. If it’s external data, name the source. If it’s your own analysis, say so.

Third, connect it to consequence. What happens if this number continues? What happens if it reverses? The consequence is what transforms data from interesting to actionable. The Insight-Implication-Action framework from the course formalises this — every data point needs an insight (what it means), an implication (why it matters), and an action (what to do about it).

This evidence layer is where AI-enhanced presentations diverge from AI-generated ones. The AI handles the organisation. You handle the meaning.

Layer 3: Emotion — The Decision Trigger

This is the layer most professionals skip, and it’s the one that matters most for executive decisions.

Executives don’t make decisions based on logic alone. Research in decision science consistently shows that emotion drives action — logic justifies it afterward. A presentation that’s logically perfect but emotionally flat produces “let me think about it.” A presentation that creates the right emotional response — urgency, opportunity, risk — produces “let’s move on this.”

The Emotion layer isn’t about manipulation. It’s about connecting your recommendation to something your audience genuinely cares about. Every executive in every meeting has emotional stakes: protecting their team, delivering on promises they’ve made, avoiding the embarrassment of backing the wrong initiative, capitalising on an opportunity before a competitor does.

AI can’t identify these emotional stakes because they’re not in any dataset. They’re in the politics, relationships, and pressures of your specific organisation. Only you know that the VP of Operations is under pressure to show efficiency gains. Only you know that the CEO mentioned supply chain risk at the last all-hands meeting. Only you know that this proposal’s biggest blocker lost a similar bet two years ago and is risk-averse as a result.

The Emotion layer asks one question for each key slide: “What does my audience feel about this — and what do I need them to feel instead?” If the current state is complacency, you need urgency. If the current state is fear, you need confidence. If the current state is scepticism, you need proof that reduces perceived risk.

This is the layer that took my client’s deck from “so what?” to a follow-up meeting where the board asked her to accelerate the initiative. Same data. Same structure. Different emotional framing.

📊 The Full Persuasion System — Not Just One Formula

AI-Enhanced Presentation Mastery teaches S.E.E. alongside five other frameworks that work together: AVP for slide structure, 132 Rule for information sequencing, Insight-Implication-Action for data storytelling, plus customised AI prompt templates that make each framework faster to apply. 8 self-study modules + 2 live Q&A sessions.

Turn AI Slides Into Executive Decisions →

Presale pricing: £249 — moves to £499 full price soon. Join anytime — get all released modules immediately.

Applying S.E.E. to Any AI Deck in 20 Minutes

Here’s the practical workflow. You’ve used AI to build your deck — structure is solid, data is in place, flow makes sense. Now apply S.E.E. in three passes:

Pass 1: Story scan (5 minutes). Review each major section. For each one, ask: “Is there a specific, concrete example from our organisation that illustrates this point?” Write one sentence per section — a client conversation, an internal metric, a project outcome, a competitor move. You’re adding the anchor that makes abstract data feel real. If you can’t find a story, the section may be filler.

Your AI workflow handled the structure. This pass handles the meaning.

Pass 2: Evidence upgrade (5–10 minutes). Review every data point. For each one, add: context (vs what benchmark?), source (where did this come from?), and consequence (what happens if this continues?). Delete any data that doesn’t have a clear implication. More data with no context is worse than less data with clear meaning. Senior leaders don’t need all the information — they need the right information, framed so the conclusion is obvious.

Pass 3: Emotion check (5 minutes). For each key decision slide — recommendations, proposals, asks — answer: “What does my audience currently feel about this topic? What do I need them to feel? What one change to this slide creates that emotional shift?” Sometimes it’s reframing the opening line. Sometimes it’s adding a consequence slide. Sometimes it’s removing a defensive caveat that signals your own uncertainty.

Total time: roughly 20 minutes on top of whatever the AI took to generate the deck. That 20 minutes is the difference between “good presentation” and “approved.”

🔍 Want the complete workflow — AI structure + S.E.E. persuasion + templates?

The course includes before/after deck transformations, S.E.E. wording templates, and AI prompt packs designed to make each pass faster. Study at your own pace.

Get the Complete AI → Executive Workflow →

How do I make AI presentations more persuasive?

Apply the S.E.E. formula after AI handles structure: add Story (specific examples from your organisation), upgrade Evidence (contextualise every data point with benchmarks and consequences), and layer in Emotion (connect your recommendation to what your audience cares about). This 20-minute review transforms AI output from informative to actionable.

Why do AI-generated presentations feel flat?

AI excels at logical organisation but lacks access to three persuasion inputs: the specific context your audience operates in, the emotional stakes attached to the decision, and the proof points this particular group will find credible. Without these, AI produces structured reports rather than persuasive arguments.

What is the S.E.E. formula for presentations?

S.E.E. stands for Story-Evidence-Emotion. Story provides concrete, real-world context that makes abstract data feel tangible. Evidence transforms raw numbers into proof by adding benchmarks, sources, and consequences. Emotion connects your recommendation to what your audience fears, wants, or needs — the trigger that turns understanding into action.

🏆 AI-Enhanced Presentation Mastery: The Complete System

S.E.E. is one framework inside a complete course that transforms how you build presentations with AI. What’s included:

  • AVP framework — Action-Value-Proof slide structure
  • 132 Rule — information sequencing matched to how audiences process an argument
  • S.E.E. formula — Story-Evidence-Emotion persuasion layer
  • Insight-Implication-Action — data storytelling framework
  • AI prompt templates — customised for each framework
  • Before/after deck transformations — real examples
  • 8 self-study modules — releasing through April 2026
  • 2 live Q&A sessions — April 2026
  • Lifetime access — all recordings, templates, and future updates

Designed for busy professionals who create presentations regularly and want to save hours while dramatically improving impact.

Get the Complete AI Presentation System →

Presale pricing: £249 — moves to £499 full price soon. 60-seat cap. Join anytime — get all released modules immediately.

Frequently Asked Questions

Can I use the S.E.E. formula with any AI tool?

Yes. S.E.E. is a human review layer applied after AI generates the initial structure. It works with ChatGPT, Copilot, Claude, Gemini, or any other AI tool. The formula is tool-agnostic — it addresses the persuasion gap that all AI tools share.

How is S.E.E. different from general storytelling advice?

General storytelling advice tells you to “add stories” without specifying where, what kind, or how they interact with data and emotional framing. S.E.E. is a systematic three-pass review designed specifically for AI-generated business presentations, with each layer serving a distinct persuasion function.

Do I need presentation design skills for this?

No. S.E.E. operates at the messaging and content level, not the design level. You’re changing what the slides say and how the argument is framed — not formatting or layout. The AI handles structure and design; you handle persuasion.

How long does the full AI-Enhanced Presentation Mastery course take?

The course is 8 self-study modules released between January and April 2026, designed for busy professionals. Each module takes 60–90 minutes. You study at your own pace, with 2 live Q&A sessions in April for questions and feedback. Lifetime access means you can revisit any material whenever needed.

📬 The Winning Edge Newsletter

Weekly strategies for AI-enhanced presentations, executive communication, and confident delivery. No filler.

Subscribe Free →

📥 Free: Executive Presentation Checklist

A quick-reference checklist for reviewing any executive presentation before delivery — including a simplified S.E.E. review prompt.

Download Free Checklist →

Related reading: The presentation was perfect — the Q&A lost the deal — once your deck has the persuasion layer, prepare for the decision-making conversation that follows.

Your next step: Take the last AI-generated deck you built. Run the three S.E.E. passes: Story scan (add one concrete example per section), Evidence upgrade (contextualise every data point), Emotion check (connect each recommendation to what your audience cares about). Twenty minutes. And if you want the complete system — S.E.E. plus AVP, 132 Rule, data storytelling, and AI prompt templates for each — AI-Enhanced Presentation Mastery (£249) gives you everything in one self-study programme.

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she has delivered high-stakes presentations in boardrooms across three continents.

A certified hypnotherapist and NLP practitioner, Mary Beth combines executive communication expertise with practical techniques for managing presentation nerves. She has trained senior professionals and executive audiences over many years.

Book a discovery call | View services

11 Feb 2026
Professional thinking strategically with AI interface, not just generating slides

AI Slides vs. AI Thinking: The Distinction That Changes Everything

“Make me a 10-slide presentation on Q3 results.”

That’s the prompt. And that’s the problem.

I watched a senior director spend 45 minutes “fixing” what AI had generated — adjusting layouts, rewriting headlines, deleting clip art nobody asked for. By the time he finished, he’d saved maybe 20 minutes compared to building it himself. And the result still felt… generic.

“AI presentations don’t work for executive content,” he told me afterwards. “They’re fine for internal updates, but anything important? I still have to do it myself.”

He was wrong. But not in the way he thought.

In 2026, the professionals pulling ahead aren’t the ones who’ve mastered AI slide generation. They’re the ones who’ve discovered that slides are the last thing AI should touch. The real leverage is upstream — in thinking, structure, and messaging. That’s the distinction nobody’s teaching.

Quick answer: “AI Slides” means using AI to generate visual outputs — layouts, formatting, design. “AI Thinking” means using AI as a strategic partner to clarify your message, structure your argument, and pressure-test your logic before you ever open PowerPoint. The distinction matters because AI is mediocre at slides but exceptional at thinking. Professionals who flip their workflow — thinking first, slides last — create presentations in half the time with dramatically better results.

Three years ago, I was sceptical of AI for presentations. I’d seen too many executives embarrassed by obviously AI-generated decks — the telltale signs, the generic phrasing, the “this could be about any company” feel.

Then I started experimenting with a different approach. Instead of asking AI to make slides, I asked it to help me think. To challenge my structure. To find holes in my argument. To translate my jargon into language my audience would actually understand. I was using AI as a thinking partner for presentations — not a production tool.

The presentations got better. Not because the slides looked fancier — they didn’t. But because the thinking was sharper. The message was clearer. The structure was tighter.

That’s when I realised: we’ve been using the most powerful thinking tool in history to do graphic design. It’s like using a Formula 1 engine to power a lawnmower. The real AI presentation strategy? Think first, slides last.

Why Most People Start at the Wrong End

The typical AI presentation workflow looks like this:

Step 1: Open AI tool
Step 2: “Create a presentation about [topic]”
Step 3: Review generated slides
Step 4: Fix everything that’s wrong
Step 5: Add what’s missing
Step 6: Rewrite what sounds robotic
Step 7: Wonder why this took so long

The problem isn’t the AI. The problem is the sequence.

When you ask AI to generate slides first, you’re asking it to make decisions it has no business making: What’s the core message? What does this audience care about? What’s the one thing you need them to remember? What action do you want them to take?

AI doesn’t know these things. So it guesses. And its guesses are generic because they have to be — it’s optimising for “probably relevant to most presentations about this topic” rather than “exactly right for your specific situation.”

The Upstream Problem

Great presentations aren’t great because of their slides. They’re great because of the thinking behind them.

Before you ever touch a slide, you need clarity on:

  • The decision you’re driving: What do you want your audience to do, approve, or believe?
  • The single message: If they remember one thing, what is it?
  • The structure: What sequence will move them from where they are to where you need them?
  • The proof: What evidence will make your argument undeniable?

These are thinking problems, not design problems. And this is exactly where AI excels — if you use it correctly.

🎓 AI-Enhanced Presentation Mastery

Learn to use AI as a strategic thinking partner, not just a slide generator. This self-paced programme teaches the frameworks, workflows, and prompts that transform how you create executive presentations — cutting creation time in half while dramatically improving impact.

Includes the AVP framework (Action-Value-Proof), the 132 Rule for structure, and a complete AI presentation workflow you can use immediately.

Join AI-Enhanced Presentation Mastery → £249

8 self-paced modules + 2 live coaching sessions + lifetime access. Study at your own pace.

What “AI Slides” Actually Produces

Let’s be honest about what happens when you ask AI to generate presentation slides:

The Generic Structure

AI defaults to safe, forgettable structures: Agenda → Background → Key Points → Summary → Next Steps. This structure works for everything, which means it’s optimised for nothing.

Your quarterly business review looks like every other QBR. Your investment pitch looks like every other pitch. Your strategic recommendation looks like a Wikipedia article with bullet points.

The Clip Art Problem

AI tools love adding visuals. Icons. Stock imagery. Decorative elements that fill space but add nothing. You spend half your editing time removing things nobody asked for.

The Voice Mismatch

AI-generated text has a tell. It’s slightly too formal, too hedged, too… diplomatic. “It is recommended that consideration be given to…” instead of “We should do X because Y.”

Executive audiences notice. They may not consciously identify it, but they feel it. The presentation lacks conviction. It sounds like it was written by a committee — because in a way, it was.

The Missing Insight

Most damning of all: AI-generated slides contain information, not insight. They tell you what happened, not what it means. They present data, not implications. They describe the situation, not the decision.

That’s the gap that kills executive presentations. And no amount of better prompting will fix it — because the problem isn’t the slides. It’s the thinking that should have happened first.


Comparison diagram showing AI for slides versus AI for thinking approaches

What “AI Thinking” Unlocks

Now consider a different approach. Before you generate a single slide, you use AI as a thinking partner:

Clarifying Your Message

“I need to present our Q3 results to the board. Our revenue is up 12% but margins are down. Help me identify the single message that positions this honestly while maintaining confidence in our strategy.”

AI won’t write your message for you. But it will help you find it — by asking questions, offering framings, and pressure-testing your logic.

Structuring Your Argument

“My audience is sceptical of this budget request. What objections will they have? In what sequence should I address them to build agreement before I ask for the money?”

This is strategic work. AI can help you map objections, sequence arguments, and identify proof points you might have missed.

Testing Your Logic

“Here’s my recommendation. Play devil’s advocate. What are the strongest counterarguments? Where is my reasoning weakest?”

Most presenters don’t stress-test their logic until they’re in the room, facing hostile questions. AI lets you do that work beforehand — privately, iteratively, without ego.

Translating Your Expertise

“I’m a technical expert presenting to non-technical executives. Here’s my explanation of the problem. Rewrite it so someone without engineering background understands why this matters.”

This is where AI shines — taking your expertise and making it accessible without dumbing it down.

Want the exact prompts and workflows? AI-Enhanced Presentation Mastery teaches you to use AI as a thinking partner — including the S.E.E. formula for making proof memorable.

Get the Course → £249

The Flipped Workflow

Here’s the workflow that actually works:

Phase 1: Think With AI (60% of your time)

Define the decision: What do you need your audience to do, approve, or believe?

Clarify the message: What’s the single idea that makes your case?

Map the audience: What do they already believe? What concerns will they have? What do they need to hear?

Structure the argument: What sequence moves them from skepticism to agreement?

Identify the proof: What evidence makes your case undeniable?

All of this happens before you open PowerPoint. AI helps you think through each step — challenging, refining, sharpening.

Phase 2: Draft With AI (25% of your time)

Only now do you create content — but not slides yet. You’re creating:

Headlines: One clear sentence per section that could stand alone

Key points: The 2-3 supporting facts for each headline

Transitions: How each section connects to the next

AI can help you draft these — but you’re editing and approving, not accepting wholesale.

Phase 3: Build Slides (15% of your time)

Now — finally — you build slides. But notice: the hard work is done. You know your message. You know your structure. You know your proof.

The slides are just containers for thinking you’ve already completed. They almost build themselves.

And if you want AI to help with layout at this point? Fine. But you’re giving it clear inputs, not asking it to guess.

📚 The Complete AI Presentation System

AI-Enhanced Presentation Mastery includes:

  • 8 self-paced modules on structure, messaging, and AI workflows
  • AVP Framework: Action-Value-Proof for executive-ready presentations
  • 132 Rule: The sequence in which your audience’s brain processes and remembers information
  • Master Prompt Pack: Ready-to-use prompts for every stage of creation
  • 2 live coaching sessions for Q&A and feedback

Join AI-Enhanced Presentation Mastery → £249

Lifetime access. Study at your own pace. Join live sessions when convenient.

Frameworks That Make AI Useful

The difference between “AI Slides” and “AI Thinking” often comes down to having frameworks that guide the conversation. Here are three that transform how you work with AI:

The AVP Framework (Action-Value-Proof)

Every presentation should answer three questions in this order:

Action: What do you want the audience to do?
Value: Why should they care? What’s in it for them?
Proof: Why should they believe you?

When you structure your AI conversation around AVP, the outputs become dramatically more focused. Instead of “create a presentation about X,” you’re saying “help me articulate the specific action I’m asking for, the value proposition for this audience, and the proof points that support my case.”

The 132 Rule

Audiences process information in a specific sequence: one main message, supported by three pillars, each backed by two proof points.

This isn’t arbitrary — it’s how memory works. One message is memorable. Three pillars are manageable. Two proof points support each pillar without overwhelming it.

When you tell AI “structure this using the 132 Rule,” you get outputs that match how your audience’s brain actually works.

The S.E.E. Formula (Story-Evidence-Emotion)

For any proof point to land, it needs:

Story: A concrete example or scenario
Evidence: Data or facts that support the story
Emotion: Connection to what the audience cares about

Most AI-generated content has evidence without story or emotion. When you explicitly ask for S.E.E., you get proof that’s memorable and persuasive, not just accurate.

Learn these frameworks in depth. AI-Enhanced Presentation Mastery includes ready-to-use prompts that apply AVP, 132, and S.E.E. to any presentation challenge.

Get the Course → £249

The Real Difference

A colleague recently showed me two presentations on the same topic — a budget request for a new initiative.

Presentation A was AI-generated. Polished slides. Professional layouts. Comprehensive information. It took 30 minutes to create. The executive committee said “interesting” and asked to revisit it next quarter.

Presentation B was AI-enhanced. Simpler slides. Less polish. But the message was razor-sharp, the structure anticipated every objection, and the proof points were undeniable. It took 90 minutes to create. The executive committee approved it on the spot.

Presentation B wasn’t better because it had better slides. It was better because the presenter had used AI to think, not just to make.

That’s the distinction that changes everything.

🎯 Transform How You Create Presentations

AI-Enhanced Presentation Mastery teaches you to use AI as a strategic thinking partner — not just a slide generator. You’ll learn:

  • The flipped workflow that cuts creation time in half
  • Frameworks (AVP, 132 Rule, S.E.E.) that make AI outputs executive-ready
  • Prompts for every stage — from clarifying your message to stress-testing your logic
  • How to transform data into stories people actually understand

Join AI-Enhanced Presentation Mastery → £249

8 self-paced modules releasing through April 2026. Join anytime — get immediate access to all released content. Lifetime access included.

📬 PS: Weekly strategies for AI-enhanced presentations and executive communication. Subscribe to The Winning Edge — practical techniques from 24 years in corporate boardrooms.

Frequently Asked Questions

Does this mean I should never use AI to generate slides?

Not at all. AI can be helpful for initial layouts, especially for routine presentations. But for anything high-stakes — board presentations, investment pitches, strategic recommendations — the thinking work should come first. Use AI for slides last, not first.

Which AI tools work best for the “thinking” approach?

Any conversational AI works — ChatGPT, Claude, Gemini. The tool matters less than how you use it. The key is treating it as a thinking partner (asking questions, getting feedback, refining ideas) rather than a production tool (generate this output for me).

How long does the “flipped workflow” actually take?

For a typical executive presentation, the thinking phase might take 30-45 minutes. Drafting another 15-20. Slides 15-20. Total: roughly 60-85 minutes for a presentation that would otherwise take 3-4 hours — and the quality is dramatically higher because the thinking is sharper.

What if I’m not good at giving AI instructions?

That’s exactly what frameworks solve. When you know to ask for AVP structure or S.E.E. proof points, you don’t need to be a “prompt engineer.” The framework does the heavy lifting. AI-Enhanced Presentation Mastery includes ready-to-use prompts for every scenario.

Related: The thinking-first approach is especially powerful for recurring executive presentations. See Transformation Program Updates That Make Executives Want to Fund You for how to structure updates that build champions.

And if presentation anxiety is holding you back from presenting your AI-enhanced work confidently, read When Your Voice Cracks Mid-Sentence for recovery techniques that work.

That senior director who told me “AI presentations don’t work for executive content” was right about the symptom but wrong about the cause.

AI presentations don’t fail because AI is bad at presentations. They fail because most people use AI to skip the thinking — when thinking is exactly what AI does best.

Flip the workflow. Think first. Slides last.

Use AI as a strategic partner, not a production tool.

That’s the distinction that changes everything.

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years in corporate banking at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she has delivered high-stakes presentations in boardrooms across three continents.

A certified hypnotherapist and NLP practitioner, Mary Beth now pioneers AI-enhanced presentation mastery — combining strategic thinking with AI efficiency. She developed the AVP framework and 3Ps methodology, refined through years of executive presentation work in high-stakes banking and consulting environments.

Book a discovery call | View services

08 Feb 2026
Maven presentation courses at test pricing showing AI-Enhanced Mastery at £249 and Executive Buy-In System at £199 with savings up to £1,152

Two Executive Presentation Courses: One for Speed, One for Buy-In

Test pricing is temporary. This transparency isn’t.

When I launched these two Maven courses, I deliberately priced them low — not as a “launch discount” marketing gimmick, but to genuinely test demand while I was still building out the content. I wanted to know: would busy professionals actually invest in comprehensive presentation training?

The answer was yes. Resoundingly yes.

Which means the test pricing window is closing. And once it does, these courses will never be available at these prices again.

Here’s what’s about to change:

  • AI-Enhanced Presentation Mastery: Currently £249 → Rising to £399 (self-study) or £750 (live cohort)
  • Executive Buy-In Presentation System: Currently £199 → Rising to £499 (self-study) or £850 (live cohort)

That’s not marketing spin. The current prices represent 37-76% savings compared to what future students will pay. And the content is identical — built from 24 years in corporate banking and consulting, plus 14+ years training senior professionals globally.

Both courses have already started, which is actually better for you — more modules are immediately available, so you can start applying the frameworks this week rather than waiting for content to release.

Let me show you exactly what each course delivers.

Quick answer: If you spend too many hours building presentations and want to cut creation time in half using AI — choose AI-Enhanced Presentation Mastery (£249 now, £399-£750 later). If you struggle to get approvals and face stakeholder resistance — choose Executive Buy-In Presentation System (£199 now, £499-£850 later). If you want speed AND buy-in, the best value is both courses for £448 — less than the future self-study price of Executive Buy-In alone (£499).

Best Value: Get Both Courses

£448

Future value: £898 self-study | £1,600 live cohort — Save up to £1,152

Lock In Test Pricing →

Or scroll down to choose just one course

💰 The Numbers Don’t Lie: Test Pricing vs. Future Pricing

Course | Test Price | Self-Study | Live Cohort | You Save
AI-Enhanced Mastery | £249 | £399 | £750 | Up to £501
Executive Buy-In | £199 | £499 | £850 | Up to £651
BOTH COURSES | £448 | £898 | £1,600 | Up to £1,152

Test pricing includes lifetime access to all materials, live Q&A sessions, and future updates.
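For readers who like to check the arithmetic, the savings figures above follow directly from the listed prices. This is just a quick sketch recomputing them — the prices are the ones quoted in this post, nothing else is assumed:

```python
# Prices quoted in this post (GBP).
prices = {
    "AI-Enhanced Mastery": {"test": 249, "self_study": 399, "live_cohort": 750},
    "Executive Buy-In":    {"test": 199, "self_study": 499, "live_cohort": 850},
}

for name, p in prices.items():
    # Maximum saving is measured against the future live-cohort price.
    max_saving = p["live_cohort"] - p["test"]
    # Percentage savings vs the two future price points (floored, as in the post).
    pct_min = 100 * (p["self_study"] - p["test"]) // p["self_study"]
    pct_max = 100 * (p["live_cohort"] - p["test"]) // p["live_cohort"]
    print(f"{name}: save up to £{max_saving} ({pct_min}%-{pct_max}%)")

both_now = sum(p["test"] for p in prices.values())            # £448
both_later = sum(p["live_cohort"] for p in prices.values())   # £1,600
print(f"Both courses: £{both_now} now, up to £{both_later - both_now} saved")
```

The “37-76%” range quoted earlier is the spread across both courses: 37% is the smallest saving (AI-Enhanced vs its future self-study price) and 76% the largest (Executive Buy-In vs its future live-cohort price).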

AI-Enhanced Presentation Mastery (£249)

The problem this solves: You’re spending 4-6 hours building presentations that should take 90 minutes. You’ve tried AI tools but end up with generic outputs that need complete rewrites. You know AI could help, but you haven’t found a system that actually works for executive-level content.

What you’ll learn:

This isn’t an AI tutorial. It’s a strategic system for using AI as a thinking partner — not a content generator.

  • The AVP Framework (Action-Value-Proof) — Structure presentations that are impossible to ignore. Create compelling outlines in minutes that guide audiences to yes.
  • The 132 Rule — Organize information in the exact sequence your audience’s brain processes and remembers it.
  • The S.E.E. Formula (Story-Evidence-Emotion) — Make your proof memorable and your recommendations impossible to dismiss.
  • Your Personal AI Playbook — Customised prompts that reflect your expertise and communication style. Create first drafts in 30 minutes.
  • Data Storytelling with AI — Transform KPIs and analytics into strategic narratives using the Insight-Implication-Action framework.

What’s included:

  • 8 self-paced modules (releasing January–April 2026)
  • 2 live 60-minute coaching sessions
  • AI-powered outline generators
  • 30+ prompt templates for different presentation types
  • Before/after slide transformations
  • Master Prompt Pack
  • Lifetime access to all materials and future updates

The practical result: You’ll cut presentation creation time by 50%+ while dramatically improving quality. One client used the AVP framework to rebuild a 47-slide deck into 12 focused slides — and got approval in the first meeting after three previous rejections.

AI-Enhanced Presentation Mastery

Test Price: £249

Future: £399 self-study | £750 live cohort

Lock In Test Pricing → £249

Modules already available. Start applying frameworks this week.

Executive Buy-In Presentation System (£199)

The problem this solves: You create solid presentations but struggle to get approval. Stakeholders push back. Decision-makers say “let me think about it” instead of “yes.” You know your recommendations are sound, but you can’t seem to get the room on your side.

What you’ll learn:

This is about influence, not information. You’ll learn the psychology of how decisions actually get made in organisations — and how to position yourself on the winning side.

  • The Champion Strategy — How to get someone fighting FOR your proposal before you even present. Pre-meeting tactics that make your presentation a formality.
  • The Objection Map — Find resistance before it finds you. Identify blockers, skeptics, and hidden agendas before you walk into the room.
  • Stakeholder Psychology — Why “alignment” fails and “enrollment” wins. The difference between people nodding and people actually supporting you.
  • The Pre-Decision Conversation — Where approvals actually happen (hint: it’s not in the presentation). How to have the conversations that matter.
  • Handling “Let Me Think About It” — Scripts and frameworks for converting hesitation into commitment.

What’s included:

  • Complete self-paced module library
  • Live Q&A coaching sessions
  • Stakeholder mapping templates
  • Pre-meeting preparation frameworks
  • Objection handling scripts
  • Decision architecture templates
  • Lifetime access to all materials and future updates

The practical result: You’ll stop being the person who presents and start being the person who gets things approved. One executive used the Champion Strategy to secure a £2M budget — the decision was essentially made before the formal presentation even started.

Executive Buy-In Presentation System

Test Price: £199

Future: £499 self-study | £850 live cohort

Lock In Test Pricing → £199

Modules already available. Start applying frameworks this week.

Is This the Right Presentation Skills Course for You?

Here’s the honest breakdown:

Choose AI-Enhanced Presentation Mastery (£249 — saves up to £501) if:

  • You spend too many hours building presentations
  • You want to use AI but haven’t found a system that works
  • You need to produce more presentations without sacrificing quality
  • You’re already decent at getting buy-in but want faster creation
  • Your main pain is time, not approval

Choose Executive Buy-In System (£199 — saves up to £651) if:

  • You create good presentations but struggle to get approval
  • You face resistance, skepticism, or “let me think about it”
  • You need to influence stakeholders without formal authority
  • Politics and hidden agendas derail your recommendations
  • Your main pain is approval, not creation time

Take both courses (£448 — saves up to £1,152) if:

  • You want the complete system — fast creation AND reliable approval
  • You’re at a career inflection point where presentations really matter
  • You recognise that £448 for both is less than the future self-study price of Executive Buy-In alone (£499)
  • You want to lock in lifetime access before prices triple

🚫 These courses are NOT for you if:

  • You’re looking for a quick PowerPoint tutorial (these are strategic frameworks, not software training)
  • You need presentation skills for academic or personal contexts (these are built for corporate/executive environments)
  • You want someone to build your slides for you (these teach you to build better, faster)
  • You’re not willing to invest 2-3 hours per week in learning and applying the frameworks

For more on structuring executive presentations, see my guide on executive presentation structure. For AI workflows, see my guide on the AI presentation workflow. For stakeholder influence, see how to get executive buy-in.

Why Test Pricing Exists (And Why It’s Ending)

I want to be completely honest about why these prices exist — because understanding this helps you see why it’s genuinely a limited window.

I needed to validate demand. Before investing hundreds of hours building comprehensive courses, I needed to know: would busy executives actually pay for in-depth presentation training? Would the frameworks I’ve used for 24 years translate to a self-paced format?

So I priced both courses low enough to test the market while I built the content. Not “discounted” — genuinely priced to test.

The test worked. Students enrolled. They’re getting results. The feedback is shaping the final versions of both courses. But now the content is nearly complete, and there’s no longer a reason to keep prices at testing levels.

Here’s what you get at test pricing that future students won’t:

  • The same content — Identical frameworks, templates, and live sessions
  • Lifetime access — Including all future updates and improvements
  • Live Q&A sessions — Worth the price difference alone
  • Maven Guarantee — Full refund eligible up until halfway point
  • 37-76% lower price — Compared to what the exact same course will cost in 3 months

The maths is simple:

If you wait and buy AI-Enhanced Presentation Mastery at the future self-study price (£399), you’ll pay £150 more for exactly the same course. If you want the live cohort experience later, that’s £750 — three times today’s price.

If you wait and buy Executive Buy-In at the future self-study price (£499), you’ll pay £300 more. The live cohort? £850 — more than four times today’s price.

If you buy both now (£448), you pay less than the future self-study price of Executive Buy-In alone (£499). The price logic is simple: test pricing exists to validate demand, not to be permanent.

Lock In Test Pricing Before It Disappears

AI-Enhanced Mastery

£249 £399-£750

Save up to £501

Lock In Test Pricing →

Executive Buy-In System

£199 £499-£850

Save up to £651

Lock In Test Pricing →

BOTH COURSES: £448 (Future value: £898-£1,600)

Lifetime access. Live Q&A sessions. Maven Guarantee.

Frequently Asked Questions

The courses have already started — am I too late?

The opposite. Because modules release over time, joining now means you get immediate access to everything that’s already available — more content ready to consume than early joiners had. You can catch up at your own pace, the live Q&A sessions are still ahead, and you’re paying the same test price. If anything, you’re getting better value than the earliest students.

Why are these prices so much lower than future pricing?

Honestly? I priced them low to test demand while building the courses. I needed to validate that busy professionals would invest in comprehensive presentation training before committing hundreds of hours to create it. The test worked — students enrolled and are getting results. Now that the content is nearly complete, there’s no reason to keep prices at testing levels. Future students will pay £399-£750 for AI-Enhanced and £499-£850 for Executive Buy-In.

What if I can’t attend the live sessions?

All live sessions are recorded and added to your course portal. You’ll have lifetime access to watch them whenever convenient. The courses are designed for busy professionals — self-paced learning with live sessions as a bonus, not a requirement.

Can my company reimburse the cost?

Yes — many employers cover professional development courses. Maven provides documentation and receipts suitable for expense claims. Both courses include certificates of completion you can share with your employer or add to LinkedIn. At test pricing, this is an easy approval — you’re essentially getting live-cohort-quality training at a fraction of typical corporate training costs.

Will test pricing return later?

No. Test pricing exists because I was validating demand while building the courses. Once the programmes are complete and established, they move to standard pricing: £399 (self-study) or £750 (live cohort) for AI-Enhanced, and £499 (self-study) or £850 (live cohort) for Executive Buy-In. This window is genuinely limited.

What’s the refund policy?

Both courses are backed by Maven’s satisfaction guarantee. You’re eligible for a full refund up until the halfway point of the course if it’s not what you expected. There’s no risk in trying — except the risk of waiting and paying 2-4x more later.

Your Next Step

Let me make this simple.

If you wait three months and buy these courses at regular pricing, you’ll pay £898 for self-study access to both — or £1,600 for live cohort access.

If you act now, you pay £448 for both. That’s less than the future self-study price of Executive Buy-In alone.

The content is identical. The frameworks took me 24 years to develop. The only difference is whether you lock in test pricing or pay 2-4x more later.

If your main pain is spending too many hours building presentations:
AI-Enhanced Presentation Mastery — £249 (future: £399-£750)

If your main pain is getting approval and buy-in:
Executive Buy-In Presentation System — £199 (future: £499-£850)

If you want the complete toolkit:
Both courses — £448 total (future: £898-£1,600)

These frameworks work. I’ve used them to train thousands of executives. You can start applying them this week. The only question is whether you’ll pay test prices or full prices for the same result.

⏰ Test Pricing Window Is Closing

Once these courses are fully established, prices rise to £399-£850 per course. Lock in test pricing now and save up to £1,152.

Best Value: Get Both Courses → £448

📧 Not Ready to Commit? Get the Newsletter First

Weekly insights on executive communication, presentation structure, and high-stakes delivery — free. See if my approach resonates before investing in a course.

Subscribe to The Winning Edge →

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years in corporate banking and consulting — including senior roles at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank — she has delivered high-stakes presentations in boardrooms across three continents.

A qualified clinical hypnotherapist and NLP practitioner, Mary Beth combines executive communication expertise with evidence-based techniques for influence and persuasion. She has trained thousands of executives and supported presentations that have secured significant funding and approvals.

Book a discovery call | View services

07 Feb 2026
Professional man at desk with laptop focused on high-impact AI presentation tasks

AI Presentation 80/20 Rule: What Actually Moves the Needle

I spent three months mastering every AI presentation tool. Then I realized I was optimizing the wrong things.

Like most people who discover AI for presentations, I went deep. Prompt engineering courses. Every Copilot feature. Claude, ChatGPT, Gamma, Beautiful.ai — I tested them all. I built elaborate workflows with multiple tools chained together.

My presentations got faster to create. But they didn’t get better. And the executives I was presenting to couldn’t tell the difference between my AI-optimized decks and the ones I’d built the old way.

That’s when I started tracking where AI actually moved the needle — and where I was just playing with shiny tools.

The Pareto Principle applies to AI presentations just like everything else: roughly 20% of AI applications deliver 80% of the value. The rest is optimization theatre.

This guide shows you where to focus.

Quick answer: The highest-impact uses of AI in presentations are: (1) structuring your argument before you touch slides, (2) pressure-testing your logic against likely objections, and (3) transforming dense content into clear, scannable formats. The lowest-impact uses — where most people spend their time — are generating slides from scratch, finding “the perfect prompt,” and automating visual design. Focus on thinking assistance, not production assistance.

⚡ Need to use AI effectively right now?

If you only have 30 minutes to improve your presentation with AI, do these three things:

  1. Ask AI to find holes in your argument. Paste your key points and ask: “What would a skeptical CFO challenge here?”
  2. Ask AI to simplify your densest slide. Paste the content and ask: “Rewrite this so a busy executive can absorb it in 10 seconds.”
  3. Ask AI for your opening line. Describe your audience and goal, then ask: “Give me 5 opening sentences that would make this audience lean in.”

These three uses take 30 minutes total and improve your presentation more than hours of prompt engineering.

📋 Copy/Paste These 3 High-Impact Prompts:

PROMPT 1: Find holes

I need to convince [AUDIENCE] to [ACTION]. Here are my key points: [PASTE POINTS]. What would a skeptical executive challenge? What’s the weakest part of this argument?

PROMPT 2: Simplify

Here’s my densest slide: [PASTE CONTENT]. Rewrite this so a busy executive can absorb it in 10 seconds. Maximum 3 bullet points, 8 words each.

PROMPT 3: Opening options

I’m presenting to [AUDIENCE] about [TOPIC]. My goal is [OUTCOME]. Give me 5 opening sentences that would make this audience lean in. Range from conservative to bold.

The High-Impact 20% (Where AI Actually Helps)

After tracking my own AI usage — and observing how executives I train actually benefit from these tools — I’ve identified five high-impact applications. These are where AI genuinely improves outcomes, not just speeds up production.

1. Structuring your argument BEFORE slides

This is the single highest-value use of AI in presentations. Before you open PowerPoint, before you think about design, use AI to pressure-test your structure.

The prompt that works: “I need to convince [audience] to [action]. Here’s my current thinking: [your key points]. What’s the most persuasive order for these points? What’s missing? What would make a skeptic say no?”

Why it matters: Most weak presentations fail at the structure level, not the slide level. Getting your argument right first means everything downstream improves. AI is genuinely good at identifying logical gaps and suggesting better sequences.

2. Pressure-testing against objections

AI can simulate a hostile audience faster than you can anticipate objections yourself. This is where the technology excels — generating variations and edge cases.

The prompt that works: “You are a skeptical [CFO/board member/client]. Here’s the presentation I’m about to give you: [paste your structure or key points]. What questions would you ask? What would make you say no? What’s the weakest part of this argument?”

Why it matters: The questions that derail presentations are usually predictable. AI helps you find them before the room does.

3. Transforming dense content into clear formats

If you have a wall of text, a complex data set, or a technical explanation that needs to become executive-friendly, AI does this transformation well.

The prompt that works: “Here’s [technical content/data/dense text]. Transform this into [a 3-point executive summary / a comparison table / a timeline / a decision tree]. A busy executive should be able to absorb this in [10 seconds / one glance].”

Why it matters: This is genuine cognitive work that AI handles well — restructuring information for a different audience. It saves time AND improves clarity.

4. Generating opening and closing options

The first 30 seconds and last 30 seconds of a presentation carry disproportionate weight. AI can generate multiple options quickly, letting you pick and refine rather than starting from scratch.

The prompt that works: “I’m presenting to [audience] about [topic]. My goal is [specific outcome]. Give me 5 different opening lines that would make this audience want to keep listening. Range from conservative to bold.”

Why it matters: Most people default to their first idea for openings. Having options improves the final choice significantly.

5. Creating speaker notes and talking points

Once your slides are structured, AI can help you prepare what to actually say — creating natural talking points that expand on slide content without reading it verbatim.

The prompt that works: “Here’s my slide: [paste content]. Write speaker notes that: expand on the key point without repeating the slide text, include one concrete example, and transition naturally to [next topic].”

Why it matters: Good speaker notes are tedious to write. AI handles this well, and strong notes dramatically improve delivery.

For more on effective AI workflows, see my guide on AI presentation workflow.

Master the AI Techniques That Actually Matter

AI-Enhanced Presentation Mastery focuses on the high-impact 20% — the specific prompts, workflows, and techniques that improve presentation outcomes, not just production speed. Self-paced modules with live Q&A calls.

Join AI-Enhanced Presentation Mastery →

Join anytime — get instant access to all released modules.

The Low-Impact 80% (Where Most People Waste Time)

These are the AI applications that feel productive but don’t meaningfully improve your presentations. Most people spend most of their AI time here.

1. Generating slides from scratch

This is where everyone starts — and where AI consistently disappoints. “Create a presentation about Q3 results” produces generic slides that require so much editing you’d have been faster starting manually.

Why it’s low-impact: AI doesn’t know your audience, your politics, your specific situation. Generated slides are starting points at best, and often worse than templates you already have.

2. Obsessing over “the perfect prompt”

Prompt engineering has become its own hobby. People spend hours refining prompts to get slightly better outputs, when the real issue is what they’re asking AI to do in the first place.

Why it’s low-impact: A mediocre prompt for a high-value task beats a perfect prompt for a low-value task. Focus on WHAT you’re asking, not HOW you’re asking it.

3. Automating visual design

AI can suggest layouts, generate images, and format slides. But polish that impresses fellow presenters rarely impresses executives. They care about clarity, not aesthetics.

Why it’s low-impact: Visual polish is the last 5% of presentation effectiveness. Getting it perfect while your argument is weak is optimization theatre.

4. Building elaborate multi-tool workflows

Using ChatGPT for structure, then Claude for refinement, then Copilot for formatting, then Midjourney for images… these workflows are intellectually satisfying but time-consuming.

Why it’s low-impact: The productivity gains from tool-chaining rarely exceed the time spent building and maintaining the workflow. Simple beats complex.

5. Generating content you should be thinking through

AI can write your executive summary, your recommendation, your conclusion. But if you’re outsourcing the thinking, you’re outsourcing the value.

Why it’s low-impact: The presentations that get approved contain thinking that couldn’t have come from a generic AI. Your judgment, your context, your insight — that’s what matters.

For more on avoiding generic AI output, see my guide on why AI-generated slides look generic.

The AI Presentation Matrix

Here’s how to think about where AI fits in your presentation workflow:

The AI Presentation 80/20 Matrix showing high-impact versus low-impact AI use cases

High Impact + Low Time Investment (DO FIRST)

  • Structure pressure-testing
  • Objection anticipation
  • Opening/closing generation
  • Content simplification

High Impact + High Time Investment (DO SELECTIVELY)

  • Speaker notes for complex presentations
  • Data visualization suggestions
  • Audience-specific customization

Low Impact + Low Time Investment (SKIP OR AUTOMATE)

  • Basic formatting
  • Spell/grammar checking
  • Simple template application

Low Impact + High Time Investment (AVOID)

  • Full slide generation
  • Complex prompt optimization
  • Multi-tool workflows
  • AI-generated visuals for executive audiences

For a complete AI presentation approach, see my guide on how to make a presentation with AI.

The Focused Workflow

Here’s the AI workflow I now use — and teach — that focuses only on high-impact applications:

Step 1: Clarify before you create (15 minutes)

Before touching any tool, answer these questions (use AI to help if needed):

  • What decision am I asking for?
  • What does this audience already believe?
  • What would make them say no?
  • What’s the one thing they must remember?

Step 2: Structure with AI assistance (20 minutes)

Use AI to pressure-test your argument structure. Share your key points. Ask for logical gaps. Ask for better sequencing. Ask what a skeptic would challenge.

Output: A clear outline with your argument in the right order.

Step 3: Build slides manually (your normal process)

Yes, manually. Your existing process for creating slides is probably fine. The structure work you did in Step 2 is what matters. Don’t let AI slow you down with generated slides you’ll need to heavily edit anyway.

Step 4: AI refinement on specific elements (15 minutes)

Use AI surgically:

  • Simplify your densest slide
  • Generate 5 opening options
  • Create speaker notes for your 3 most complex slides
  • Anticipate questions for your Q&A

Step 5: Human review (always)

Every AI output gets human review. Check for accuracy, tone match, and context appropriateness, and cut anything that sounds generic or could have come from anyone.

Total AI time: ~50 minutes, focused entirely on high-impact applications.

Learn the Focused AI Approach

AI-Enhanced Presentation Mastery teaches you exactly where AI helps and where it doesn’t — with specific prompts, real examples, and the workflow that senior professionals actually use. No fluff, no tool obsession, just results.

Join AI-Enhanced Presentation Mastery →

Self-paced learning with live Q&A calls. Join anytime.

Frequently Asked Questions

Isn’t using AI for slides the whole point?

It’s the obvious application, but not the valuable one. AI-generated slides require so much human editing that the time savings are minimal. The real value is using AI for thinking assistance — pressure-testing arguments, anticipating objections, simplifying complex content. These improve your presentation regardless of how you build the slides.

What about Copilot in PowerPoint — isn’t that high-impact?

Copilot is useful for specific tasks: reformatting existing content, suggesting layouts, generating speaker notes. It’s not useful for creating presentations from scratch. Think of it as an assistant for production tasks, not a replacement for thinking. Use it selectively, not comprehensively.

How do I know if I’m wasting time on low-impact AI use?

Ask yourself: “Is this helping me think more clearly, or just produce faster?” If you’re spending time refining prompts, chaining tools, or generating content you’ll heavily edit anyway, you’re in the low-impact zone. If AI is helping you see gaps in your logic or simplify your message, you’re in the high-impact zone.

Should I use multiple AI tools or just one?

One tool, used well, beats three tools used superficially. Pick the AI you’re most comfortable with (ChatGPT, Claude, Copilot) and learn to use it effectively for the high-impact applications. Tool-switching creates friction that usually exceeds any capability gains.

Your Next Step

The 80/20 rule applies to AI-assisted presentations just as it does to everything else. Most of the value comes from a small number of applications — and most of the wasted time comes from chasing the wrong optimizations.

Focus on structure, objection-testing, and content simplification. Skip the elaborate workflows and slide generation. Use AI as a thinking partner, not a production tool.

That’s where the needle actually moves.

Ready to master AI presentations the right way?

Join AI-Enhanced Presentation Mastery →

📧 Get the Winning Edge Newsletter

Weekly insights on AI-enhanced presentations, executive communication, and high-stakes delivery — practical techniques you can use immediately.

Subscribe free →

Related reading: One of the highest-stakes presentations you might face is a restructuring announcement. Read Restructuring Announcement Presentation: What HR Won’t Tell You for the structure that preserves trust when delivering difficult news — an example where human judgment matters more than AI assistance.

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she has seen firsthand which presentation approaches actually influence executive decisions — and which are optimization theatre.

She now teaches senior professionals how to use AI tools strategically, focusing on the applications that improve outcomes rather than just production speed.