Tag: senior leaders

06 May 2026
Senior leaders waste hours on generic Copilot output. Three specific prompts turn Copilot into a genuine board-presentation partner. Here is how.

Copilot PowerPoint for Board Presentations: The 3 Prompts That Work

QUICK ANSWER

Most senior leaders use Copilot to ask for a complete board presentation. That is why the output reads generic. Three specific prompts, used in the right order, turn Copilot into a genuine board-presentation partner: a stakeholder-mapped opening, a decision-framed middle, and a predicted-question close. Each prompt assumes the strategic work is yours. Copilot drafts the structure so you can spend your time on judgement, not formatting.

If you want the structured approach behind these prompts

The AI-Enhanced Presentation Mastery course from Maven is a self-paced programme covering the prompt and workflow patterns that take Copilot from drafting tool to presentation partner.

Explore the Programme →

Ngozi, a regional operations director at a biotech company, rebuilt the same board deck four times in one afternoon. She had used Copilot to generate the first draft — a 12-slide update for the quarterly operations review. The output looked polished. The sections were logical. The language was professional. But when she read it back, it could have belonged to any company, in any industry, at any quarter. Her board would read three slides and switch off.

She opened a blank prompt window and tried again. “Build a board deck covering Q1 operations performance.” Same result. Slight variations in headings. Same generic feel. By the third attempt she had realised something that changes how senior leaders should use Copilot for presentations: the AI is not the problem. The prompt is asking the AI to do strategic work that only the presenter can do.

The professionals who get genuinely useful Copilot output for board presentations do something different. They do the strategic thinking first, then use Copilot to draft the structure their thinking requires. Three specific prompts, used in the right order, make this work. Each assumes that the judgement is yours and the drafting is Copilot’s.

Why most Copilot board decks read generic

Copilot is a drafting tool. It is very good at producing coherent text that matches patterns it has seen before. It is not good at knowing which board member will block your proposal, what the finance director is quietly worried about, or why this particular quarter matters differently from the last three. These are strategic inputs only the presenter has.

When senior leaders prompt Copilot with “build a board deck on X”, the AI has nothing to work with except pattern-matching. It produces the average of every board deck it has ever seen. Average board decks are unmemorable. They earn polite acknowledgement and no action.

The shift is to stop asking Copilot for decks and start asking Copilot for specific structural work. The three prompts below do that. Each names exactly what structural output is needed. Each supplies the strategic context Copilot cannot guess. Each produces drafts that feel tailored because they are.

Three-prompt framework for using Copilot on board presentations: stakeholder-mapped opening, decision-framed middle, predicted-question close

WHEN COPILOT HAS TO HOLD UP IN A BOARDROOM

Move beyond basic AI usage to executive-grade output

The AI-Enhanced Presentation Mastery course is a self-paced programme with 8 modules and 83 lessons on using AI (including Copilot) to structure, draft, and refine presentations that hold up at senior levels. 2 optional live coaching sessions with Mary Beth, fully recorded — watch back anytime. Monthly cohort enrolment; lifetime access to materials.

  • 8 modules, 83 lessons on AI-assisted executive presentation work
  • Prompt and workflow patterns for Copilot and ChatGPT, board-level output
  • 2 optional live coaching sessions with Mary Beth (recorded)
  • Self-paced, no deadlines, no mandatory live attendance
  • Monthly cohort enrolment — enrol any time

£499, lifetime access to all course materials.

Explore AI-Enhanced Presentation Mastery →

Designed for senior professionals who need AI to produce executive-grade output, not generic drafts.

Prompt 1: The stakeholder-mapped opening

The opening of a board presentation carries more weight than the middle. Board members decide in the first two or three slides whether to lean in or let their attention drift. The opening has to land for the specific people in the room, not for boards in general.

Before you prompt Copilot, write down three facts:

  • Which board member matters most on this topic — who will either support or block the decision?
  • What that person is quietly worried about before the meeting (risk, cost, reputation, precedent)
  • What they need to see in the first two slides for you to have their attention for the rest

Now the prompt:

“I am presenting to a board where the most influential decision-maker on this topic is [role]. Their primary concern before this meeting is [specific worry]. I need a two-slide opening that addresses their concern in the first 60 seconds, without burying the answer. Draft Slide 1 (the one-sentence answer to the implied question they’re bringing into the room) and Slide 2 (three supporting points that map to their concern). No preamble, no company-of-the-future language.”

Copilot produces an opening grounded in a real person’s real concern. That is different from every generic board-opener it would otherwise draft. You will still edit the output. But the draft will have a centre of gravity to edit around.

Prompt 2: The decision-framed middle

The middle of a board deck is where most presentations drift. Slide after slide of context, data, background. By the time the presenter arrives at the ask, the board has spent its attention on material that was the journey, not the answer. Board members rarely say this out loud. They just disengage.

A decision-framed middle does the opposite. Every slide exists because it supports a specific decision the board is about to make. Slides that do not serve that decision get cut or moved to an appendix.

The prompt:

“The decision the board is making is: [specific decision]. Assume they already know [common background you would otherwise over-explain]. Build a 4-slide middle that (1) names the decision in one sentence at the top of Slide 1, (2) shows the two realistic options the board can choose between, (3) gives the supporting evidence for the recommended option, and (4) addresses the strongest argument against. Each slide must directly serve the decision. No context slides, no history, no company-values language.”

The output will be tighter than a generic Copilot draft because the prompt has told Copilot what to leave out, not just what to include. The discipline of naming the decision forces Copilot to cut the padding that would otherwise fill the deck. If you want an overview of where this fits in the broader AI-for-presentations landscape, ChatGPT for PowerPoint presentations covers the parallel approach for non-Microsoft environments.

Before and after comparison of Copilot board deck drafts showing how strategic context in the prompt changes the output quality

Prompt 3: The predicted-question close

The close of a board presentation is the slide you land on before the Q&A begins. Most closes are either a generic “Thank you, questions?” slide or a summary of everything already covered. Both waste the moment. The slide the board is looking at when the first question comes is the slide that shapes the first question.

A predicted-question close shows the board the three questions you are ready to answer. That does two things at once. It frames the Q&A around the questions you want. And it signals preparation — the board member about to ask a harder question will often reframe it because your visible preparedness has raised the bar.

The prompt:

“The three hardest questions the board will ask about [specific proposal] are likely to be [Q1], [Q2], [Q3]. Draft a single closing slide that lists all three as bullet points with a one-sentence direct answer under each. Professional tone, no defensive language, no hedging. The purpose of the slide is to show readiness, not to answer in full — each answer should invite a conversation, not close it down.”

The closing slide produced by this prompt does something unusual. It leaves the board with the impression that you have already thought through the hard parts. That is the impression most senior leaders want and rarely manage to create. It also makes the Q&A shorter and more focused, which every board member quietly appreciates.

Want the prompts ready to use?

The Executive Prompt Pack contains 71 ChatGPT and Copilot prompts for PowerPoint presentations — including board-level prompts, stakeholder-mapped openings, and decision-framed middle sections. £19.99, instant download.

Get the Executive Prompt Pack →

How to sequence the prompts

The three prompts are designed to be used in order. Opening first, because the opening sets what the rest of the deck has to support. Middle second, because the middle adapts to the opening you have committed to. Close third, because the close has to match the questions the opening and middle will provoke.

Running them in any other order usually produces a deck that feels stitched together. Running them in order produces a deck that feels coherent, even when each prompt runs in a separate Copilot session. Senior leaders who use this sequence regularly report that the total time from blank deck to editable first draft drops from two or three hours to around 25 minutes — and the draft is actually worth editing.

One more thing. Copilot’s output still needs an editorial pass. The prompts give you a draft with a real centre of gravity. They do not give you a final deck. The best Copilot PowerPoint prompts and the editing workflow that cleans up the output work together. Neither replaces the other.

The three prompts also apply when you are using Copilot to refine an existing deck, not to build from scratch. Run the opening prompt against the first two slides you already have. The gap between the current opening and the stakeholder-mapped version is usually where the board was losing attention. Fix that first.

Frequently asked questions

Do these prompts work with ChatGPT as well as Copilot?

Yes. The structural logic is the same. ChatGPT and Copilot will produce slightly different drafts because their training and defaults differ, but the prompts give both models the strategic context they need. If you are comparing the two tools for executive slide work, Copilot vs ChatGPT for executive slides covers the differences in detail.

How long should it take to prepare the strategic inputs before prompting?

Around 15 to 20 minutes for most board presentations. That feels slow the first time, but it replaces one to two hours of generic output and rework. The strategic inputs are the same work the presenter would have had to do anyway — the prompts just make the thinking explicit up front.

What if I do not know who the most influential board member on the topic is?

Ask one of your peers or your sponsor. Board influence is rarely what the org chart suggests. The influential member on a cost decision is usually not the one who dominates strategy discussions. If the topic is genuinely novel, the most influential person is whoever has asked the sharpest questions at the last two meetings on adjacent topics.

Should I tell the board I used Copilot to draft the deck?

No, and the question itself points to a worry worth examining. Copilot is a drafting tool, the same way Word is a typing tool. The value you bring is the strategic thinking, the editorial judgement, and the delivery. Leading with “I used AI” tends to shift attention from the decision to the tool, which is not what board time is for.

Do these prompts apply to investor presentations as well as board presentations?

Partially. The stakeholder-mapped opening and the predicted-question close translate cleanly. The decision-framed middle needs adapting because investor presentations often have a different centre of gravity — investment thesis rather than operating decision. The structural discipline still helps.

The Winning Edge

Weekly thinking for senior professionals on executive presentation craft — slide structure, Q&A, delivery, AI, and the judgement calls the frameworks do not cover. Thursday mornings, one considered issue.

Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Pyramid Principle Template — the structure most board slides fail to use, in a one-page reference.

Next step: pick one upcoming board presentation. Run the stakeholder-mapped opening prompt this week. See whether the draft lands differently from your usual Copilot output. That one change tends to be the one that reveals the rest.

For the parallel comparison between Copilot and ChatGPT on executive slide work, see Copilot vs ChatGPT for executive slides. For what happens when Copilot’s first draft does not hold up under boardroom scrutiny, see why Copilot’s first draft fails boardroom tests.


About the author

Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, a UK company founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises senior professionals across financial services, healthcare, technology, and government on structuring presentations for high-stakes decisions, board approvals, and executive scrutiny.

06 May 2026
Senior leaders want to know which AI writes boardroom-ready content. The real answer turns on workflow, not output quality. Here is how they differ.

Copilot vs ChatGPT for Executive Slides: What Actually Differs

QUICK ANSWER

For executive slides, neither Copilot nor ChatGPT produces consistently better content. The real difference is workflow. Copilot wins when your deck lives inside Microsoft 365 and you need slide-level editing without context-switching. ChatGPT wins when you want deeper reasoning passes before building slides. Most senior leaders end up using both — ChatGPT for the thinking, Copilot for the drafting — and the decision is about which one carries the most value at your stage of preparation, not which one is smarter.

If you want the framework that makes both tools genuinely useful

The AI-Enhanced Presentation Mastery course is a self-paced Maven programme covering how to use AI (including both Copilot and ChatGPT) for executive-grade presentation work.

Explore the Programme →

Rafaela, a chief of staff at a mid-sized insurance group, told me last month she had run the same executive brief through both Copilot and ChatGPT and could not tell which was which. Her frustration was not that the output was bad. It was that both looked competent and neither felt right. She wanted a clear answer to a clear question — which one is better for executive slides — and she was not getting one.

She is not alone. Senior leaders across financial services, pharma, and tech keep asking the same version of this question. Part of what makes it hard to answer is that the honest response is “it depends on your workflow, not on the model.” That is an unsatisfying answer, so people keep looking for a cleaner one. There is not one. But there is a useful structure for thinking about when each tool earns its place in executive preparation.

The question senior leaders are actually asking

Beneath the surface question — which tool writes better boardroom-ready content — there is usually a more specific one. Senior leaders are trying to decide whether to pay for a ChatGPT subscription when their company already provides Copilot. They are trying to work out whether switching tools mid-workflow costs them more time than it saves. They are wondering if choosing the “wrong” AI will make their slides worse.

The honest answer to each of those questions is the same. Output quality between Copilot and ChatGPT on executive presentation work, holding the prompt constant, is close enough that it stops being the deciding factor. What differs is the surrounding workflow: where the tool sits, what it connects to, and what friction it removes or adds as you move from strategic thinking to slide drafting.

Once you stop comparing on output quality and start comparing on workflow fit, the choice gets simpler. So does the decision to use both.

Side-by-side comparison of Copilot and ChatGPT workflow strengths for executive slides

BEYOND “WHICH TOOL IS BETTER”

Learn the prompt and workflow framework that turns AI into a presentation partner

AI-Enhanced Presentation Mastery is a self-paced Maven programme — 8 modules, 83 lessons covering prompt design, Copilot and ChatGPT workflows, and the editorial judgement that separates usable output from generic AI drafts. 2 optional live coaching sessions, fully recorded. Monthly cohort enrolment; lifetime access.

  • 8 modules, 83 lessons — self-paced
  • Prompt patterns that work across Copilot and ChatGPT
  • Workflow templates for executive slide preparation
  • 2 optional recorded coaching sessions with Mary Beth
  • Lifetime access to materials

£499, lifetime access to all course materials.

Explore AI-Enhanced Presentation Mastery →

Designed for senior professionals using AI to produce executive-grade presentations.

Where Copilot wins for executive slides

Copilot’s natural advantage is context. It lives inside PowerPoint, reads the slides you are already building, and can operate on them directly. When the question is “rewrite this title slide to be punchier” or “turn these three bullets into a two-sentence summary in the same tone as slide 4”, Copilot does not need the context explained. It has it. ChatGPT would require copy-paste in both directions.

That matters more than it sounds. Senior leaders editing executive decks at the detail level make hundreds of small adjustments. Every context-switch — copy the slide, paste into ChatGPT, edit prompt, copy output, paste back — costs attention. Multiply by thirty adjustments and the workflow friction becomes the dominant cost. Copilot in PowerPoint removes that friction.

Copilot also wins when the deck draws on internal documents or email threads. If your proposal references last quarter’s board minutes, an earlier project brief, and a recent executive memo, Copilot (with tenant-level permissions) can pull from those directly. ChatGPT cannot, unless you paste the relevant content in.

Where Copilot’s natural advantage ends is in deeper reasoning. Copilot is tuned for task completion within Microsoft 365, which means it tends to produce shorter, more tactical responses. For “help me think through the argument structure” work, it is less useful than ChatGPT.

Where ChatGPT wins for executive slides

ChatGPT’s natural advantage is depth of reasoning in a single conversation. For the strategic thinking that has to happen before you start building slides — what is the actual argument, who is the audience, what counter-arguments need addressing, what is the strongest one-sentence answer — ChatGPT is usually the better environment. You can run several iterations of thinking, push back, add new constraints, and work through to a structured answer before you open PowerPoint.

It also wins when you want to explore multiple framings of the same idea. “Give me three different ways to open this proposal” produces more varied output on ChatGPT than on Copilot, which tends to converge quickly on a single patterned response.

For the predicted-question close of a board deck — anticipating the hardest questions and drafting concise answers to each — ChatGPT’s longer reasoning window means it can hold the full context of the argument while generating the Q&A material. Copilot, working slide by slide, loses that context between turns. For the underlying approach see Copilot PowerPoint for board presentations, which covers the three-prompt framework that makes either tool more useful.

Where ChatGPT’s advantage ends is in operational tasks. “Apply this design change to every slide in the deck” is not ChatGPT’s work. That is Copilot’s.

Dashboard showing executive AI workflow stages: thinking, structuring, drafting, editing, and which tool fits each stage

Is the output quality genuinely different?

This is where most comparison articles fall apart. They run the same prompt through both tools, compare the output, and declare a winner. The test is misleading because it holds the prompt constant but ignores workflow. A prompt that is optimal for Copilot (slide-level, context-aware, short) is not optimal for ChatGPT (multi-turn, reasoning-rich, longer). The reverse is also true.

When you prompt each tool in the way that suits it, the output on executive presentation work is close. There are tonal differences — Copilot tends toward corporate and compact; ChatGPT tends toward considered and longer — and those differences matter for taste more than they matter for quality. Neither produces a finished executive deck from a generic prompt. Both produce useful drafts when prompted with the strategic context the presenter supplies.

The useful question is not “which one is better?” It is “which one removes friction at the stage of preparation I am currently in?” Strategic thinking stage — ChatGPT. Slide-level drafting and editing stage — Copilot. Most executive decks benefit from both.

Ready-made prompts for both tools

The Executive Prompt Pack contains 71 ChatGPT and Copilot prompts for PowerPoint work — including strategic-thinking prompts for ChatGPT and slide-level operational prompts for Copilot. £19.99, instant download.

Get the Executive Prompt Pack →

How to use both without duplicating effort

The senior leaders who get the most from both tools run a simple two-stage workflow. Thinking in ChatGPT first. Drafting and editing in Copilot second. The stages rarely overlap. When they do, the result is usually worse than using one tool cleanly.

Stage one: open ChatGPT. Work out the argument. What is the one-sentence answer? Who is the most influential decision-maker and what is their quiet concern? What are the two realistic options the audience is choosing between? What is the strongest argument against the recommended option? What are the three hardest questions?

Stage two: open PowerPoint with Copilot active. Start building. Feed Copilot the output from stage one as slide-level prompts. Let Copilot draft titles, bullets, and summaries. Edit directly on the slides. Use Copilot for design-level adjustments and cross-slide consistency.

The handoff from stage one to stage two takes about a minute. The total time from blank deck to editable first draft usually drops to 30 to 40 minutes for a 10- to 12-slide board update. That is with both tools doing the work each is suited for. It compares well to the two to three hours most senior leaders spend when using a single tool for everything.

For the full landscape on executive AI presentation work see ChatGPT for PowerPoint presentations. For the editing pass that cleans up AI drafts before they reach a board, see the best Copilot PowerPoint prompts.

Frequently asked questions

Is Copilot included free with Microsoft 365?

Microsoft 365 Copilot is a paid add-on for most business tiers. Your organisation may or may not have provided access. If you already have it, start there — the integration advantages are real and there is no extra cost. If you do not, a ChatGPT subscription is usually the quicker path to improved executive presentation work because it does not require enterprise procurement.

Can I use ChatGPT plugins to edit PowerPoint directly?

Not in the same way Copilot does. Some ChatGPT integrations can generate a draft deck, but they do not read and operate on slides you are already building. For slide-level editing inside an existing deck, Copilot remains the more practical option in the Microsoft environment.

Does it matter which tool I use for the Q&A preparation?

Slightly. ChatGPT tends to produce more considered and varied possible questions because it holds the argument context over a longer conversation. Copilot produces tighter, more operational Q&A material. For hostile or complex board Q&A, ChatGPT is often the better starting point. For straightforward operational updates, either works.

Is it safe to paste confidential board material into ChatGPT?

Check your organisation’s AI policy first. Many organisations have approved Copilot because it runs within their Microsoft 365 tenant and keeps data inside the boundary. The same organisations often prohibit pasting confidential material into consumer ChatGPT. ChatGPT Enterprise or Team tiers address this concern but require an account at the organisational level.

Will this preference change as the models improve?

The integration advantages of Copilot and the reasoning advantages of ChatGPT are structural: they come from where each tool sits, not from the models themselves. Model improvements will narrow the output-quality gap further, which makes workflow fit the dominant factor rather than the secondary one.

The Winning Edge

Weekly thinking for senior professionals on executive presentation craft — slide structure, Q&A, delivery, AI, and the judgement calls the frameworks do not cover.

Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Pyramid Principle Template — the argument structure both Copilot and ChatGPT draft better output against.

Next step: pick the next executive deck on your calendar. Do the first 20 minutes of thinking in ChatGPT. Then open PowerPoint with Copilot and draft from that thinking. Notice whether the handoff felt cleaner than your usual single-tool workflow. The answer is usually yes.

For a related deep-dive on what to do when Copilot’s first draft does not hold up under boardroom scrutiny, see why Copilot’s first draft fails boardroom tests.


About the author

Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, a UK company founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises senior professionals across financial services, healthcare, technology, and government on structuring presentations for high-stakes decisions.

06 May 2026
Copilot's first draft feels polished but falls apart under boardroom scrutiny. Here is exactly what goes wrong and the editing pass that repairs it.

Why Copilot’s First Draft Fails Boardroom Tests (And the Editing Pass That Fixes It)

QUICK ANSWER

Copilot’s first draft of a board deck usually fails on four specific dimensions: the opening buries the answer, the middle over-explains context, the recommendation lacks a defended position, and the close invites vague Q&A instead of framing it. The fix is an editing pass with a specific order — answer first, cut context second, commit to a position third, frame the questions fourth. The editing pass takes around 30 minutes and is usually the difference between a deck that lands and one that earns polite nods.

For the full editing framework

The AI-Enhanced Presentation Mastery programme covers the full editorial pass that turns AI drafts into executive-grade material — from opening structure to Q&A preparation.

Explore the Programme →

Tomás, the commercial director at a European logistics business, sent me a Copilot-generated board deck the evening before his quarterly review. He was nervous. The deck looked professional. The sections were reasonable. The language was competent. But when I read it back from the perspective of a board member at 2pm after four other agenda items, the problem was obvious. The deck answered no specific question, took no defended position, and gave the board nothing to decide. It felt like a very well-dressed placeholder.

We spent 35 minutes editing it together. The deck that arrived at the meeting the next morning was the same length. The data was the same. The design was the same. What changed was the centre of gravity. The opening answered a question the board was actually asking. The middle cut the context that was not serving the decision. The recommendation committed. The close told the board what Tomás was ready to discuss. The decision went his way.

Copilot’s first drafts fail boardroom tests for predictable reasons. They are not bad drafts. They are first drafts that have not yet been edited with boardroom judgement applied. The four failures below are the ones that appear in almost every AI-generated executive deck. Each has a specific repair.

Failure 1: The opening buries the answer

Copilot’s default is to build toward the answer. The draft begins with context, moves through background, arrives at supporting data, and eventually reveals the recommendation three or four slides in. This is how students are taught to write essays. It is not how board presentations work.

Board members are not reading an essay. They are deciding whether to engage. The first two slides are where they decide. If the answer is not in those slides, the board mentally files the presentation as “update” rather than “decision”, and attention shifts elsewhere. The deck can still be delivered successfully — but the board is no longer leaning in.

The repair is to move the answer to Slide 1. One sentence. “We recommend investing £X in initiative Y to achieve Z by Q3.” Slide 2 is the three supporting points that justify the recommendation. Everything else becomes the evidence. This is the Pyramid Principle, and Copilot does not apply it by default because most text in its training data does not follow it. You have to apply it in the edit.

Failure 2: The middle over-explains context

Copilot writes thoroughly. For most uses that is a strength. For board decks it is a liability. The middle of a Copilot draft usually contains two to four slides of context that the board already knows — market overview, business background, historical performance — material that could be covered in two lines of the opening.

The test for every middle slide is: does this slide directly serve the decision the board is about to make? If not, it goes into the appendix or gets cut. Most Copilot drafts have at least one “journey of the quarter” slide that tells the board what happened in the sequence it happened. The board does not need this. They need to know what the presenter learned and what it means for the decision.

In the edit, read every middle slide and ask one question. “If I cut this slide, does the board’s decision get worse?” If no, cut it. The usual outcome is that a 12-slide Copilot draft becomes an 8-slide deck. The 8-slide version lands harder.

Four boardroom failures of Copilot first drafts: buried answer, over-explained context, undefended recommendation, vague close

TURN AI DRAFTS INTO BOARDROOM MATERIAL

The editing framework senior professionals use to make AI output executive-ready

AI-Enhanced Presentation Mastery is a self-paced Maven programme. 8 modules, 83 lessons covering prompt design, editorial passes, and the workflow that turns AI drafts into decks that hold up under senior scrutiny. 2 optional live coaching sessions with Mary Beth, fully recorded. Monthly cohort enrolment; lifetime access.

  • 8 modules, 83 lessons — self-paced
  • Editing frameworks for AI-generated executive drafts
  • Prompt patterns that reduce rework
  • 2 optional live coaching sessions (recorded)
  • Lifetime access to all materials

£499, lifetime access.

Explore AI-Enhanced Presentation Mastery →

Designed for senior professionals producing AI-assisted executive presentations.

Failure 3: The recommendation lacks a defended position

This is the most common failure and the hardest one to spot. Copilot drafts tend to present options. “Option A, Option B, Option C, each with these trade-offs.” The board reads this and sees neutrality. Neutrality, in a board setting, gets interpreted as the presenter not having a view — or worse, not being willing to commit to one. Both readings cost credibility.

A defended position names the preferred option and explains why it is preferred — including the strongest argument against it and why that argument is not decisive. This is not the same as removing the other options. The options can still appear, usually in a single slide. But the recommendation slide names one and defends it.

In the edit, find the recommendation slide. Ask: “If a board member asked me ‘which option do you actually want?’ — is the answer unambiguous on this slide?” If not, rewrite until it is. Then add one sentence on the strongest counter-argument and why the recommendation still holds. Board members trust presenters who have already reasoned through the objection, because it signals they have done the work.

Failure 4: The close invites vague Q&A

Most Copilot drafts end with a “Thank you. Questions?” slide or a summary of everything that was said. Both are wasted. The slide on screen at the moment the Q&A opens is the slide that shapes the first question. A blank thank-you slide produces whatever question the board members happen to have first. A summary slide produces questions about what has already been covered.

A close that frames the Q&A does something different. It lists the three questions the presenter is ready to answer — the hardest questions about the proposal. This earns attention for two reasons. It signals that the presenter has anticipated the difficult parts. And it implicitly invites the board to ask one of those questions, which means the conversation stays on the terrain the presenter has prepared.

In the edit, replace the “Questions?” slide with a three-question framing slide. Draft the questions honestly — what are the hardest things the board could ask about this proposal? — and list them with one-sentence direct answers. This is not a script for the Q&A. It is a scaffold.

Ready-made editing prompts for AI drafts

The Executive Prompt Pack contains 71 ChatGPT and Copilot prompts, including editing-pass prompts for cleaning up AI-generated executive drafts. £19.99, instant download.

Get the Executive Prompt Pack →

The editing pass that fixes the draft

Running the four fixes above in the order they appear in the deck takes about 30 minutes for a 10-to-12-slide draft. The order matters. Answer first, because the answer determines what the middle has to support. Context second, because cutting context reveals which supporting evidence is actually load-bearing. Position third, because only a defined recommendation can be defended. Close fourth, because the Q&A framing has to match the position the deck has committed to.

A useful way to run the pass is to read the deck end-to-end first, in one sitting, from the board’s perspective. Not yours. Not the drafter’s. The perspective of a board member who has already sat through three agenda items. What is missing? What would they actually decide on the basis of this? Where does their attention drift? The answers to those three questions tell you where to cut and where to sharpen.

The editing pass is where human judgement meets AI drafting. Copilot produces the draft. The presenter produces the decision-ready deck. The cleaner version of this is covered in Copilot PowerPoint for board presentations, which gives the three prompts that make the first draft closer to decision-ready from the start. And for the parallel view on tool selection, see Copilot vs ChatGPT for executive slides.

One more note. Do not skip the editing pass because the draft looks polished. Polish is the thing Copilot does best. Polish without a defended position is what gets board decks politely acknowledged and quietly shelved. The 30-minute pass is the work that prevents that outcome. The best Copilot PowerPoint prompts make the first draft stronger; the editing pass makes the final deck land.

Frequently asked questions

Can I prompt Copilot to avoid these four failures from the start?

Partially. The three prompts in the stakeholder-mapped, decision-framed, predicted-question sequence produce first drafts that are closer to decision-ready. But even those drafts need an editorial pass. No prompt gets you past the need for human judgement on what to keep, cut, and commit to.

How do I spot “buried answer” when I am close to the deck?

Open Slide 1 and read it aloud. If you cannot tell a colleague “this is what I am asking the board to do” from the text on that slide alone, the answer is buried. The fix is to rewrite Slide 1 as a one-sentence recommendation.

What if my organisation expects long, context-rich decks?

Separate the deck the board sees from the read-ahead pack. The read-ahead can contain all the context Copilot generated. The live deck is the 8-slide version with the answer first, the position defended, and the close framed. Most organisations respect this separation once they see it in action.

Does the editing pass work on ChatGPT drafts too?

Yes. The four failures are shared across both tools because they reflect how generative AI defaults to pattern-matched, essay-style output. The editing pass is the same. The difference is that ChatGPT drafts tend to be longer and may need more cutting; Copilot drafts tend to be shorter and may need more strengthening of position.

How long should I spend on the editing pass for a smaller internal deck?

The same time. A non-board internal deck does not face the same scrutiny, but the four failures still degrade any executive presentation. Thirty minutes of editing is rarely the wrong investment for a deck that a senior audience will see.

The Winning Edge

Weekly thinking for senior professionals on executive presentation craft — slide structure, Q&A, delivery, AI, and the judgement calls the frameworks do not cover.

Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free Pyramid Principle Template — the structure that prevents the buried-answer failure in the first place.

Next step: pick the next AI-drafted deck on your desk. Run the 30-minute editing pass before you send it to anyone. The deck you send will land differently — and you will know why.

For a related deep-dive on the psychological side of AI-assisted executive work, see AI anxiety for executives.


About the author

Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, a UK company founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises senior professionals across financial services, healthcare, technology, and government on structuring presentations for high-stakes decisions.

06 May 2026
Senior leaders using AI often feel less credible, not more. The anxiety is real and the fix is not about better tools. It is about confidence boundaries.

AI Anxiety for Executives: When the Tech Makes You Feel Less Credible

QUICK ANSWER

AI anxiety for executives is not about the technology. It is about the quiet worry that using AI makes you seem less capable, less original, or less in control. The anxiety shows up as hesitation to use AI tools even when they would help, a reluctance to admit AI involvement, and a sense that the work is somehow not fully yours. The fix is not better tools. It is a clear internal boundary between what AI drafts and what you judge — and the recognition that judgement is the credible part.

For the underlying confidence work

Conquer Your Fear of Public Speaking is a structured programme for senior professionals whose anxiety shows up in high-stakes presentation moments.

Explore the Programme →

Astrid, a senior partner in a professional services firm, described something to me recently that she had not admitted to her peers. She had started using ChatGPT to structure her client-facing presentations. The output was genuinely better than what she had produced alone. Her clients had noticed. And she felt worse about her work than she had in years.

She was not worried about being caught. She was worried about something harder to name. It felt as though the good parts were not fully hers. Every time she gave a presentation that landed well, a quiet voice asked whether the landing was her skill or the tool’s. She had started avoiding AI for important client work — not because it made the work worse, but because it made her feel less capable.

This is AI anxiety for executives. It is not about AI. It is about the identity work that senior professionals do around competence, originality, and earned authority — and the way those things feel threatened when a machine starts producing drafts that hold up at their level.

What AI anxiety looks like in senior leaders

AI anxiety in senior professionals rarely announces itself. It shows up as a cluster of small behaviours that look like preferences but are really defences. The senior partner who avoids Copilot for the quarterly report “because I prefer to think on paper first.” The director who writes the first draft manually, then asks AI for minor edits, rather than the reverse. The executive who uses AI extensively in private and downplays it publicly. The leader who rereads their own output and cannot tell whether they wrote it or the AI did, and finds that a surprisingly uncomfortable question.

The common thread is that the anxiety runs alongside genuine capability. These are not people who need AI. They are people who have quietly noticed that AI makes some parts of their work easier, and who have started worrying about what that means. The worry is not irrational. It is about identity and signal.

The usual advice — “just use the tools, they are amazing” — misses the point. The anxiety is not technical. It is existential in the mild, everyday sense of that word. It is about what counts as your work and what counts as the tool’s work, and whether the distinction matters when the output is the same either way.

Why AI can feel like a credibility threat

Senior professionals have built credibility over years, often decades, through the accumulated evidence that they can produce good work reliably. The work is the signal. Reduce the visible effort behind the work and the signal weakens — at least, that is what the anxious part of the mind concludes. This is not a careful conclusion. It is a fast one, running in the background while the thinking mind is doing something else.

There is also a second layer. Senior audiences can increasingly tell when output has been AI-drafted. The tonal patterns, the structural defaults, the particular flavour of competent-but-generic writing — these become recognisable. Senior leaders who use AI start to worry that their audience will detect it, and that detection will be interpreted as laziness or as intellectual outsourcing. This worry is usually larger than the actual risk, but it is real.

Four ways AI anxiety shows up in senior professionals and what each behaviour is actually protecting

Underneath both layers is something worth naming directly. The real credibility of a senior professional is not in the words on the slide. It is in the judgement behind those words — which questions to ask, which data to trust, which argument to commit to, which risk to take. AI cannot replicate that. What AI can do is draft, assemble, and format. These are the parts of the work that are the least credibility-carrying, even though they take the most visible time.

Senior professionals who feel less credible when using AI are usually confusing the drafting with the judging. They still do the judging. AI does not. But because drafting becomes faster and more polished, the professional loses the visible evidence of effort, even though that effort was never the credible part in the first place.

WHEN ANXIETY SHOWS UP IN HIGH-STAKES PRESENTATION MOMENTS

Structured work for senior professionals whose presentation anxiety is affecting performance

Conquer Your Fear of Public Speaking is Mary Beth’s programme for professionals whose anxiety shows up in the moments that matter most — board rooms, client pitches, high-stakes presentations. Drawn from 5 years of personal experience with acute presentation anxiety and 16 years of coaching senior leaders through it.

  • Structured anxiety-reduction protocols for high-stakes moments
  • Pre-presentation preparation routines
  • In-the-moment recovery techniques
  • Mindset work for senior professionals
  • Instant download, lifetime access

£39, instant access.

Explore Conquer Your Fear of Public Speaking →

Designed for senior professionals facing high-stakes presentation moments.

The boundary that restores confidence

The fix for AI anxiety in senior leaders is not more or less AI. It is an explicit internal boundary between two categories of work. Category one is what AI drafts. Category two is what you judge. The boundary clarifies which parts of the work are credibility-carrying and which parts are operational.

AI drafts: the structural outline, the first-pass copy, the tonal calibration, the bullet points, the summary paragraphs. These are the visible parts. They were never where your credibility lived. Senior professionals with 20 years of experience do not have more credibility than junior professionals because they can write bullets faster. They have more credibility because they know which bullets matter.

You judge: which argument to build the deck around, which audience member is the real decision-maker, which risk to surface explicitly and which to leave in the appendix, which number to lead with, which counter-argument to engage directly, which option to recommend, which question to be ready for. Every one of these decisions is yours. AI cannot do any of them without your strategic inputs. You are still doing all the credibility-carrying work. The drafting just happens faster.

Once the boundary is clear, AI stops feeling like a threat to your competence. It becomes a drafting tool, like the word processor you already use without any existential anxiety. The operational parts get faster. The judgement parts remain yours and always were. The clean version of this workflow is covered in why Copilot’s first draft fails boardroom tests, which shows exactly where AI drafting ends and human judgement takes over.

If you want a reliable starting point for AI prompts

The Executive Prompt Pack contains 71 ChatGPT and Copilot prompts designed for senior professionals — prompts you can use immediately without the anxiety of getting them wrong. £19.99, instant download.

Get the Executive Prompt Pack →

What to say if asked whether you used AI

The question “did you use AI for this?” is usually a proxy question. What the asker often wants to know is whether the presenter has understood the material well enough to answer questions about it. “Yes, I used AI to draft the structure, and then I made the decisions about what to keep, what to change, and what position to take” is a strong answer. It is also true. It separates the drafting from the judging, which is the distinction that matters.

Leading with “I didn’t use AI” when you did has a predictable cost. If any part of the output reads as AI-drafted — and senior audiences increasingly pick this up — the presenter has now lied about a small thing, which undermines trust on larger things. The pretence is not worth it.

Leading with “I used AI to draft this” without qualification sometimes lands poorly because it suggests the professional did nothing. The useful phrasing names both halves. “I drafted with AI, edited with judgement” — or a variation in your own words — captures the distinction accurately.

There are contexts where AI involvement genuinely matters: client work, regulated decisions, and output that will be audited. In those cases, the correct thing to do is disclose according to the relevant rules, without anxiety about it. The rules exist because AI use is now a normal part of professional work, not an exception.

Frequently asked questions

How is AI anxiety different from ordinary presentation anxiety?

Ordinary presentation anxiety is about the moment of delivery — the racing heart, the shaking hands, the fear of freezing. AI anxiety is quieter and more cognitive. It happens before the presentation, often while preparing, and it is about identity rather than physiology. Both can coexist and both can affect performance, but they have different triggers and need different interventions.

Is there a point at which using AI for presentation work becomes inauthentic?

Authenticity in senior work is not about how much you wrote yourself. It is about whether the argument, decisions, and positions represent your thinking. If you used AI to draft the structure and then you committed to what the deck recommends because you believe it is the right recommendation, the deck is authentic. If you presented a recommendation you did not understand or did not agree with, the deck would be inauthentic — regardless of whether AI was involved.

Should I tell my board that I used AI to prepare the materials?

Usually not, and not because there is anything to hide. Board time is for decisions, not for explanations of drafting tools. If asked directly, answer honestly using the “drafted with AI, edited with judgement” framing. If not asked, there is no reason to offer the information unless your organisation has a disclosure policy.

I use AI extensively and feel fine about it. Am I missing something?

Probably not. People who have clear internal boundaries between AI drafting and their own judgement usually do not experience AI anxiety. The worry is most common in people who are either new to AI tools or who are uncertain about which parts of their work are credibility-carrying. If you have thought through the distinction and feel settled, you are where you want to be.

Can AI anxiety affect presentation delivery on the day?

Yes, indirectly. Senior leaders who feel uncertain about the provenance of their material sometimes deliver with less confidence than usual, even when the material itself is strong. This shows up as extra caveating, over-explanation, or a defensive edge during Q&A. The fix is the internal boundary described above — once it is clear, delivery confidence returns.

The Winning Edge

Weekly thinking for senior professionals on executive presentation craft — the judgement calls, confidence boundaries, and quiet practices that frameworks do not cover.

Subscribe to The Winning Edge →

Not ready for the full programme? Start here instead: download the free 7 Presentation Frameworks Quick Reference — the structural scaffolds that give your own thinking a reliable shape, with or without AI.

Next step: draw the boundary for yourself this week. Write down three parts of your next presentation that AI can draft and three parts that are yours to judge. Notice how different it feels when the distinction is explicit rather than implicit.

For the structural side of AI-assisted executive work, see Copilot PowerPoint for board presentations.


About the author

Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, a UK company founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises senior professionals on the psychology of high-stakes presentation work — including the quieter confidence issues that affect senior performers.