Tag: executive AI prompts

10 May 2026
Professional woman in a blazer writes notes at a wooden desk with a laptop and large monitor nearby.

Copilot Prompts for CFOs: Build a Budget Presentation in 45 Minutes

Quick answer: Copilot can compress the first draft of a CFO budget presentation from three hours to forty-five minutes — but only if you feed it a structured five-prompt sequence rather than a single open instruction. The order matters: strategic narrative first, then variance, then risk, then investment-versus-cost split, then Q&A pre-empt. Each prompt references the prior output, so Copilot builds on its own scaffolding rather than restarting. Without that sequence, Copilot produces a generic finance deck that fails the first board read-through. With it, you walk in with a draft that needs trimming, not rebuilding.

Anneliese Voss is the CFO of a mid-cap European industrials business. Last quarter she had a budget cycle from hell — a senior board sponsor on holiday, a finance team stretched thin by a system migration, and an audit committee meeting moved forward by three weeks. She had forty-five minutes between back-to-back meetings to produce the first draft of the FY budget presentation. She opened Copilot in PowerPoint, typed “create a budget presentation for the board covering next year’s plan with revenue, costs, headcount and capex”, and let it run.

What Copilot produced was not unusable. It was worse than that. It was generic — competent-looking slides with the structure of any budget deck, full of placeholder phrases like “strategic priorities” and “operational excellence”, with charts that mapped no real numbers to any real decision. Anneliese spent the next three hours rewriting almost everything. The forty-five-minute time saving was a forty-five-minute time loss.

The lesson Anneliese took into the next budget cycle is that Copilot is not a single-prompt tool for executive finance work. It is a five-prompt tool, and the sequence matters more than any individual prompt. When she returned to the same task with a structured sequence — narrative first, then variance, then risk, then investment-versus-cost, then Q&A pre-empt — the forty-five minutes produced a draft that needed editing, not rebuilding. The board read-through happened. The recommendation landed.

Want the full Copilot prompt library for executive presentations?

The Executive Prompt Pack is the practical library senior professionals use to get sharper, more strategic output from Copilot and ChatGPT — built for executive presentations, not generic decks. Seventy-one prompts covering strategic narrative, variance framing, board Q&A, executive summaries, and decision slides.

Explore the Executive Prompt Pack →

Why Copilot’s first draft fails the CFO test

The default failure mode of Copilot in finance work is not factual error. It is structural emptiness. Asked for a budget presentation, Copilot returns a deck that looks like a budget presentation — the right slide titles, the right chart shapes, the right boilerplate language about strategic priorities and operating efficiency. What it lacks is the load-bearing content that makes a budget presentation work: the bridge from prior period to current ask, the variance commentary that anticipates the audit committee’s questions, the explicit framing of which line items are investment and which are cost.

The structural emptiness is a function of the single-prompt approach. When you ask Copilot for “a budget presentation”, you are asking it to compress the entire reasoning of a finance team into a single inference pass. It cannot do that work. What it can do is build one specific layer of the deck if you give it one specific instruction at a time, and let it use its earlier output as the substrate for the next layer.

The other failure mode is voice. Copilot defaults to a corporate-press-release tone — “we are committed to driving sustainable growth across the portfolio” — that no senior finance audience tolerates. CFOs and audit chairs read that voice as a tell that the deck was generated, not authored. The fix is not to ban the AI but to constrain the voice in the prompt itself, repeatedly, with reference to specific style anchors. Why Copilot’s first draft fails boardroom tests covers the editing pass that strips this voice; the prompt sequence below avoids generating it in the first place.

Infographic showing the five-prompt Copilot sequence for a CFO budget presentation in order: strategic narrative, variance and prior-period bridge, risk and sensitivity envelope, investment versus cost split, and Q and A pre-empt, with each prompt feeding the next

The five-prompt sequence in order

The sequence below is the structural skeleton for any CFO-level budget presentation. It assumes you have already pasted the financial source data — variance table, prior-period actuals, FY plan, sensitivity assumptions — into the Copilot context, either as a file reference, a paste, or a chat thread that includes them. Without source data, Copilot will invent numbers, which is the only failure mode worse than generic output.

Each prompt in the sequence is short. Each one references the prior output rather than starting from scratch. Each one constrains voice and detail to what the next layer needs. The total time, with source data prepared, is forty to fifty minutes — about thirty minutes of Copilot generation and editing, plus ten to fifteen minutes of structural review.

The sequence is not a script. It is a scaffold. Real budget presentations have edge cases — a contested capex line, a flat headcount with rising salary cost, a foreign-exchange exposure that has moved since the last audit committee. The scaffold accommodates these by giving you a clean structural draft to deviate from, rather than starting from a blank slide.

The 71-prompt library that sharpens executive presentations

Build executive slides in 25 minutes, not 3 hours. The Executive Prompt Pack is a practical Copilot and ChatGPT prompt library for senior professionals who need their AI output to read like a senior finance leader wrote it — not a press release. £19.99, instant download, 71 prompts.

  • 71 ChatGPT and Copilot prompts engineered for PowerPoint presentations
  • Strategic narrative, variance framing, executive summary, and Q&A pre-empt prompts
  • Voice-constrained — built to avoid the generic AI tone CFOs and audit chairs reject
  • Works inside Copilot for PowerPoint and ChatGPT — copy, paste, adapt
  • Designed for executive presentations: budget, board, investment committee, steering

Get the Executive Prompt Pack →

Built for senior professionals presenting budgets, plans, and decisions to boards and audit committees.

Prompt 1 — Strategic narrative frame

The first prompt does not produce slides. It produces the narrative spine of the deck — the three-sentence answer to the question “what is the board being asked to approve, and why now”. Without this spine, every subsequent slide drifts. With it, each slide has a job: support the spine, qualify the spine, or quantify the spine.

The prompt itself: “Using the source data provided, draft three sentences that frame the FY budget request for an audit committee audience. Sentence one names the headline ask in financial terms. Sentence two identifies the strategic shift the budget supports versus prior year. Sentence three names the single largest risk and how the budget addresses it. Voice: senior finance leader speaking to audit committee, no marketing language, no platitudes.”

The output should be three sentences, no more. If Copilot produces a paragraph, ask it to compress to three sentences and remove any phrase that could appear in any company’s annual report. The compressed three sentences become the title slide narrative, the executive summary slide, and the closing recommendation slide — three slides anchored by one consistent message. A CFO-approved budget presentation template uses this same three-sentence spine as its structural base, regardless of company size or sector.

Prompt 2 — Variance and prior-period bridge

The variance slide is the slide that audit committees and boards spend most time on. It is also the slide Copilot is least naturally good at, because it requires reading the prior-period numbers, the current-period plan, and the bridging logic — and many AI tools attempt the third without securing the first two.

The prompt: “Using the prior-period actuals and FY plan in the source data, build a bridge slide that walks from prior-year actual to FY plan in four to six steps. Each step is a single line item or category. Each step has a value (positive or negative versus prior year) and a one-line rationale. Order the steps largest first. Do not invent any numbers. If a number is not in the source data, write [TBC] in its place.”

The “[TBC]” instruction matters. It is the constraint that prevents Copilot from filling gaps with plausible-looking inventions — the most dangerous failure mode in finance work. The bridge slide that comes back will not be perfect, but every number on it will be either real or marked as missing. The editing pass becomes verifying real numbers and filling marked gaps, rather than discovering invented ones.

For an audit-committee-grade variance slide, the bridge format is non-negotiable: prior-year base, plus or minus volume effect, plus or minus price effect, plus or minus mix or cost effect, plus or minus FX or one-off, equals current-year plan. Copilot will follow this format if you specify it. The deck the audit chair sees then matches the format the audit chair expects, which removes one layer of friction from the read-through.
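
The bridge logic the prompt describes can be sketched in a few lines of Python. This is purely illustrative — the names, figures, and the `build_bridge` helper are invented for this sketch, not part of Copilot or any budget pack — but it shows the two constraints the prompt enforces: steps ordered largest first, and missing values surfaced as [TBC] rather than invented.

```python
# Minimal sketch of the bridge-slide logic the prompt asks Copilot to follow.
# All labels and figures are illustrative, not from a real budget pack.

def build_bridge(prior_year, steps):
    """Walk from prior-year actual to FY plan; missing values stay as '[TBC]'."""
    lines = [f"Prior-year actual: {prior_year}"]
    total = prior_year
    # Split known from unknown steps: unknown values never enter the total,
    # which is the guard against plausible-looking invented numbers.
    known = [s for s in steps if isinstance(s[1], (int, float))]
    unknown = [s for s in steps if not isinstance(s[1], (int, float))]
    # Order known steps largest absolute value first, as the prompt specifies.
    for label, value in sorted(known, key=lambda s: abs(s[1]), reverse=True):
        total += value
        lines.append(f"{'+' if value >= 0 else '-'} {label}: {abs(value)}")
    for label, _ in unknown:
        lines.append(f"? {label}: [TBC]")
    lines.append(f"= FY plan (known steps only): {total}")
    return lines

bridge = build_bridge(
    prior_year=100.0,
    steps=[("Volume effect", 6.0), ("Price effect", 4.5),
           ("Cost inflation", -3.0), ("FX and one-offs", "[TBC]")],
)
print("\n".join(bridge))
```

The point of the sketch is the editing discipline it mirrors: every line on the slide is either a real number from the source data or a visibly marked gap, so review means verifying and filling, not hunting for fabrications.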

Diagram of a CFO budget bridge slide showing prior-year actual to FY plan in five labelled bridge steps with positive and negative variance values, illustrating the format senior finance leaders use to walk audit committees through year-on-year change without losing the room

Prompt 3 — Risk and sensitivity envelope

Risk slides in budget presentations fail in two predictable ways. They list every risk imaginable — twelve bullets, no prioritisation — and the audit committee tunes out. Or they list the top three risks but provide no sensitivity analysis, leaving the committee unable to weigh the materiality. Copilot will produce either of these failure modes by default. The prompt has to push it past both.

The prompt: “Using the FY plan from prior outputs, produce a risk slide with three components. First, the top three downside scenarios, ranked by impact on operating profit. Each scenario has a one-line description, a quantified impact (range, not point estimate), and a likelihood band (low, medium, high). Second, the single upside scenario most likely to materialise. Third, the single mitigating action the budget already funds against the largest downside. Voice: factual, no hedging language, no qualifiers like ‘subject to market conditions’.”

The “no hedging language” instruction is critical. Copilot defaults to qualifying every risk statement, which produces slides that read as if the finance function is hedging the hedge. Audit committees read that as evasion. The cleaner the risk slide, the more credible the budget. The prompt forces the cleanliness.

What you get back is a risk slide that names three downsides with quantified impact, names one upside, and names one mitigating action. That structure is what executive finance audiences want to see — risks acknowledged, sized, and managed — and what most budget decks fail to deliver. The slide will need editing, but the structure will be right. The Executive Prompt Pack includes voice-constrained risk-slide prompts for budget, capex, and strategic-plan presentations, each tuned to the audience that reads them.

Prompt 4 — Investment-versus-cost split

Most budget presentations conflate two very different categories of spend. There is cost — the spend required to keep the business running at current capability. And there is investment — the spend that builds new capability, capacity, or revenue. When the deck blurs the two, the audit committee cannot tell whether a year-on-year increase is operational drift or strategic intent. The board cannot tell whether to approve.

The prompt: “Using the FY plan, produce a single slide that splits total budget into two columns: cost-to-operate and investment-to-grow. Each column shows the top three line items by value, with year-on-year change versus prior period. Add a one-line description of what each investment line item is funding. Add a closing line stating what proportion of total budget is investment versus prior year. No marketing language. Use plain finance vocabulary.”

The split is what allows the audit committee to weigh the budget strategically rather than operationally. A flat or rising cost-to-operate raises questions about discipline. A rising investment-to-grow raises questions about return. Putting both side by side on a single slide forces the committee to discuss the right thing — strategic shift — rather than the wrong thing — a line-by-line item review.

Ready for the full AI presentation framework, not just prompts?

The Maven AI-Enhanced Presentation Mastery course is the self-paced programme for senior professionals using AI to build executive-grade presentations. 8 modules, 83 lessons, 2 optional recorded coaching sessions. £499, lifetime access — monthly cohort enrolment, no deadlines, no mandatory attendance.

Explore AI-Enhanced Presentation Mastery →

Prompt 5 — Q&A pre-empt

The fifth prompt does not produce a slide. It produces a one-page Q&A pre-empt — the five questions the audit committee is most likely to ask, with the structured answer for each. This page does not appear in the deck. It sits in your speaking notes and in the appendix, available if a question lands.

The prompt: “Based on the FY plan, variance slide, risk slide, and investment-versus-cost split produced in earlier outputs, generate the five questions an audit committee is most likely to ask. For each question, draft a forty-five-second structured answer in three parts: acknowledge the question, give the directly responsive number or fact, then bridge to the broader strategic position. No filler, no hedging. Voice: senior finance leader, decision-confident.”

The Q&A pre-empt is the layer most often skipped in budget preparation, and the layer most often regretted. A budget presentation that lands cleanly in the read-through can still lose the room in Q&A if the CFO is caught flat-footed by a question that was always going to come. Five minutes producing this prompt, ten minutes editing the answers, and you walk in with the structured response to the questions you are most likely to face.

This is also the prompt where Copilot’s value compounds the most. Because each prior prompt has been constrained, voice-controlled, and built on the same source data, the Q&A pre-empt the AI produces is grounded in the same numbers and same framing as the deck. Without the prior sequence, a stand-alone Q&A prompt produces generic interview-coaching language. With it, the questions and answers map directly to the slides the audit committee just read.

What Copilot still cannot do for you

The forty-five-minute draft is real, but the draft is a draft. Three things still need a senior finance human, and skipping any of them is the difference between a deck that lands and a deck that gets sent back for rework.

The first is the materiality judgement. Copilot will treat all numbers as equally significant. The judgement of which line items deserve airtime in a forty-minute audit committee slot, and which can sit in the appendix or be summarised, is yours. The deck the AI produces typically has eight to twelve content slides; the deck the audit committee should see has five to seven. Cutting from the first to the second is structural editing, not prompt engineering.

The second is the political read. Every audit committee has live tensions — a contested capex line, a sponsor with a known view, a chair who is sceptical of headcount growth. Copilot does not know any of this. The strategic narrative the AI drafts will be technically correct but politically naïve. The CFO’s job is to bend the narrative around the live tensions — softening where appropriate, hardening where the case is strong, naming the elephant in the room where the room is going to ask anyway.

The third is the proof obligations. Copilot will state things the deck cannot defend. “Our cost discipline programme is on track” sounds fine until the audit chair asks for the run-rate evidence. Every claim in the deck has to be verifiable in the underlying numbers. The editing pass is the discipline of striking any sentence the budget pack itself does not prove.

None of these three jobs is being automated soon. What is being automated is the structural drafting — the work of taking source data and turning it into a passable executive deck format. That work used to take a CFO and finance team three hours. With the right prompt sequence, it now takes forty-five minutes, and the saved time goes back into the materiality judgement, the political read, and the proof discipline that AI cannot do.

Stop spending three hours on the structural draft of your budget deck.

The Executive Prompt Pack — £19.99, instant download — gives you the seventy-one Copilot and ChatGPT prompts that compress executive presentation drafting from hours to minutes, with voice and structure already constrained for senior finance audiences.

Get the Executive Prompt Pack →

Built for CFOs, finance directors, and senior professionals presenting budget, plan, and capex decisions.

FAQ

Does Copilot in PowerPoint actually read my source data, or do I need to paste numbers into the prompt?

Copilot in PowerPoint reads from open files in your Microsoft 365 environment if you reference them by name in the prompt — for example, “using the FY26 plan in BudgetPack.xlsx”. For documents not in the same workspace, paste the source numbers directly into the prompt or chat thread. Copilot will not invent numbers if you provide them and instruct it to flag missing values with [TBC]. Without source data, it will produce plausible-sounding but unverifiable figures, which is the worst failure mode in finance work.

Can I run all five prompts in a single Copilot session, or do I need to start fresh each time?

Run them in a single session. The reason the sequence works is that each prompt builds on the prior output — the variance prompt references the strategic narrative, the risk prompt references the variance, the Q&A pre-empt references all four. Starting fresh between prompts loses that compounding context, and the AI returns to generic defaults. Keep the chat thread open across all five prompts; the saved context is the productivity gain.

What if my company restricts Copilot for sensitive finance data?

Many finance functions operate Copilot in a tenanted Microsoft 365 environment with data-residency and protection controls — that is the configuration most large enterprises use for AI in sensitive workflows. If your IT or compliance function has not yet approved Copilot for finance data, the same prompt sequence works in any Copilot-equivalent enterprise AI assistant your organisation has approved. The structural sequence is the productivity unlock; the specific tool is interchangeable.

How much editing should the forty-five-minute draft actually need?

Roughly thirty per cent of the content, in our experience with senior finance leaders. The structural skeleton, the bridge format, and the risk-slide structure should be usable as drafted. The voice in places — particularly any phrasing Copilot defaults to that reads as marketing — needs replacing. The materiality call (which line items deserve their own slide) needs human judgement. The proof discipline (every claim verifiable) needs the CFO’s eye. Treat the forty-five-minute output as a structural draft, not a finished deck.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Not ready for the full prompt library? Start here: download the free CFO Questions Cheatsheet — the questions audit committees ask in budget read-throughs, and the structured response format that lands cleanly under pressure.

Next step: open the next budget deck on your calendar and run the first prompt — the strategic narrative frame. Three sentences, audit-committee voice, no marketing language. That five-minute exercise is the foundation everything else in the deck rests on; once it is right, the rest of the sequence builds itself.

Related reading: copilot prompts for executive presentations across the wider executive deck library, plus why Copilot's first draft fails boardroom tests and the editing pass that fixes it.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 25 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.

25 Feb 2026
Executive with glasses evaluating AI-generated presentation on laptop screen, chin resting on hand in critical thought, printed slide documents on desk beside him

AI Presentation Structure: AI Can Write Your Slides. It Can’t Structure Your Argument.

I watched a board ignore 22 perfect AI-written slides — because not one of them asked for a decision.

Quick Answer: AI generates content — clear sentences, reasonable data points, professional formatting. What it can’t generate is AI presentation structure: the decision architecture that determines which slide goes where, what the room needs to decide, and why the evidence is sequenced to lead them there. If you ask AI to “create a board presentation,” you’ll get 15-20 slides of competent content with no argument. The fix: build the structural skeleton first (what decision, what recommendation, what evidence in what order), then use AI to fill each section.

A client — a VP at a technology company — sent me his board presentation and asked for feedback. It was 22 slides. Beautifully written. Consistent formatting. Every slide had clear bullet points and supporting data.

He’d used ChatGPT to build it, and the output was impressive. Clean language. Professional tone. Relevant content.

One problem: nowhere in 22 slides did it say what decision the board needed to make.

There was no recommendation. No “I’m asking for X by Y date.” No comparison of options with trade-offs. No cost of inaction. Just 22 slides of well-written information, sequenced in the order the AI had generated it — which was the order of his prompt, not the order of a decision-first argument.

I asked him: “If the board reads only slide 1, do they know what you’re asking for?” He looked at slide 1. It was a project overview. They wouldn’t know the decision until slide 19.

We restructured in 90 minutes. Same data, same AI-written content — but reorganised around a decision architecture. Recommendation on slide 2, evidence supporting it, options with trade-offs, specific ask with a deadline.

The board approved it in the first 10 minutes.

🚨 Built a presentation with AI and it feels flat? Quick check: Does slide 1 tell the room what decision you need? If the decision is on slide 15+, you have a content deck, not an argument.

→ Need the structural skeleton that makes AI output land? Get the Executive Slide System → £39

The Difference Between Content and Structure (And Why AI Only Gives You One)

Content is what your slides say. Structure is the order they say it in and why.

AI is extraordinarily good at content. Ask ChatGPT to “write a slide about Q3 revenue performance” and you’ll get a clear, professional summary with relevant data points. Ask it to “write 15 slides for a board presentation on Project Phoenix” and you’ll get 15 clear, professional slides.

What you won’t get is an argument. Because an argument requires something AI doesn’t have: knowledge of the decision-maker, the political context, the urgency, the alternatives, and the specific outcome you need from the room.

AI presentation structure fails because AI sequences content in the order it was prompted, not in the order that leads a room to a decision. It generates in narrative order (background → context → analysis → findings → recommendation) when executive communication requires decision-first order (recommendation → evidence → options → ask).

This is the fundamental gap. It’s not about better prompts, more specific instructions, or a different AI tool. It’s about the structural logic that determines what goes on slide 1, what goes on slide 5, and what the room is doing on slide 10.

For more on the difference between AI-enhanced and AI-generated presentations, see the full comparison.

Why do AI-generated presentations fail with executives?

Because executives read slides in decision mode — they’re looking for the recommendation, the risk, the cost, and the ask. AI generates slides in information mode — sequenced to inform, not to persuade. When an executive hits slide 5 and still doesn’t know what you’re asking for, they check out. The content might be better than anything you’d write manually. But without decision architecture, it’s like having a perfectly worded email with no subject line.

Why AI Presentations Fail in Executive Settings

After reviewing hundreds of AI-generated executive decks — from clients using ChatGPT, Copilot, Gamma, and others — I see the same three structural failures every time.

Failure 1: The recommendation is buried. AI typically generates in chronological or logical order: background first, analysis second, conclusions third, recommendation last. In a 20-slide deck, the recommendation lands on slide 17-20. By then, three executives have left and two more are on their phones. Executive presentations need the recommendation on slide 1 or 2 — everything after that is evidence supporting the ask.

Failure 2: No options or trade-offs. AI generates a single recommendation because that’s what it was asked for. But decision-makers need options. “I recommend A” gives the room two choices: yes or defer. “Here are three options with costed trade-offs, and I recommend A because…” gives them agency. AI doesn’t create options unless specifically prompted — and even then, it doesn’t quantify the trade-offs the way an executive audience needs.

Failure 3: No cost of inaction. The most powerful slide in any decision deck is the one that shows what happens if the room doesn’t decide. AI never generates this slide because it doesn’t understand that executive meetings exist to make decisions, and that deferral is the default outcome unless you make it expensive. The decision slide structure includes this by default — AI doesn’t.

⭐ Give AI the Structure It’s Missing — Then Let It Do What It’s Good At

The Executive Slide System gives you 22 structural skeletons — the decision architecture AI can’t generate. Each template tells you what goes on every slide and why. Then the 51 matched AI prompts (Draft → Refine → Executive Polish) fill the structure with content that sounds like you.

Your structure-first AI toolkit:

  • 22 executive slide templates — the structural skeleton for board decks, status updates, proposals, and recommendations
  • 51 AI prompts in 3 stages: Draft (generate content), Refine (sharpen for audience), Polish (stress-test as a sceptical CEO)
  • 15 scenario playbooks — find your exact situation, follow the template + prompt sequence like a recipe
  • Decision architecture built into every template — recommendation, options, cost of inaction, specific ask

Get the Executive Slide System → £39

Built from 24 years of executive presentations — the structural logic AI doesn’t have.

The Structure-First AI Workflow: Decision → Skeleton → AI

The fix is simple but counterintuitive: you need to build the structural skeleton BEFORE you open AI. Most people do the opposite — they prompt AI first, then try to restructure the output. That’s backwards.

Step 1: Define the decision. Before you write a single prompt, answer: “What specific decision do I need from this room?” Not “inform them about the project.” Not “update them on progress.” A decision: “Approve £400K additional budget by March 7.” If you can’t state the decision in one sentence, you’re not ready to build slides — with or without AI.

Step 2: Build the skeleton. Choose a template that matches your scenario. A board presentation needs a different skeleton than a project status update, which needs a different skeleton than an investment proposal. The skeleton determines what goes on each slide and in what order — recommendation first, evidence second, options third, ask last.

Step 3: Prompt AI to fill each section. Now — and only now — use AI. But not with a single prompt like “create a board presentation.” Instead, prompt section by section: “Write the executive summary for a £400K technology investment. The recommendation is to approve. The key evidence is…” When AI fills a pre-built structure, the output has the decision architecture the room needs.

This is the approach that turned my client’s 22-slide information deck into a 12-slide decision deck — same data, same AI-generated language, fundamentally different outcome.

For a library of proven prompts, see the complete guide to ChatGPT prompts for presentations.

The 3-Prompt System: Draft → Refine → Executive Polish

One prompt doesn’t produce executive-quality output. Three prompts do — if they’re sequenced correctly.

Prompt 1: Draft. Generate the content for a specific slide or section. Be specific about the scenario, the audience, and the data. “Create content for a Q3 business review for the finance committee. Include: revenue vs target, three significant wins with quantified impact, two challenges with root causes, and three priorities for next quarter.”

Prompt 2: Refine. Sharpen the output for the specific audience. “Make this more impactful for a CFO audience. Each win should quantify business impact. Challenges should include what we’re doing about them. Remove metrics that don’t connect to business outcomes.”

Prompt 3: Executive Polish. Stress-test it. “Review this through the eyes of a CEO with five other meetings today. What would they skip? What questions would they ask? Strengthen the ‘so what’ for each point. Ensure the decision is specific and time-bound.”

Each prompt layer adds something the previous one didn’t: the Draft gives you content, the Refine makes it audience-specific, and the Polish makes it decision-ready. Without the structural skeleton underneath, all three layers produce better-written information. With the skeleton, they produce an argument.

The Structure-First AI Workflow showing three steps from decision definition through structural skeleton to AI content filling

The 51 AI prompts in the Executive Slide System are pre-written in the Draft → Refine → Polish sequence for every template — so you’re not writing prompts from scratch. Open the template, run the three matched prompts, and the structural skeleton fills itself with executive-quality content. Get the Executive Slide System → £39

What AI IS Good At (Once the Structure Exists)

This isn’t an anti-AI article. AI is transformative for presentations — but only when it fills a structure rather than creating one.

Once you have the decision architecture in place, AI excels at: generating clear, professional language for each section; stress-testing your content from the audience’s perspective; finding gaps in your logic that you’ve become blind to; polishing language to be more concise and direct; and creating supporting data visualisations.

The combination of human structure + AI content is more powerful than either alone. You bring the judgement (what decision, what audience, what politics). AI brings the execution speed (clear language, consistent tone, gap identification). The structural skeleton is the interface between the two.

The professionals who are most effective with AI aren’t the ones writing the best prompts. They’re the ones who know what the room needs BEFORE they open ChatGPT. Structure first. AI second. That’s the workflow that gets decisions.

⭐ Stop Getting 22 Slides of Information and Zero Decisions

The Executive Slide System is the structural skeleton that makes AI output actually work in executive meetings. Each of the 22 templates includes the decision architecture — recommendation position, evidence sequence, options framing, specific ask — that AI can’t generate on its own.

Your structure-first AI deliverables:

  • 22 structural templates — recommendation-first, decision-ready, each with mapped slide sequence
  • 51 matched AI prompts — 3 per template (Draft → Refine → Executive Polish), pre-written and ready to paste
  • 15 scenario playbooks — find your exact situation, follow template + prompt sequence in under 30 minutes
  • 6 checklists — verify decision readiness, argument logic, and executive clarity before presenting

Get the Executive Slide System → £39

The structural logic from 24 years of executive banking + 51 AI prompts that fill it in minutes. Structure first. AI second. Decisions always.

The 15 scenario playbooks in the Executive Slide System tell you which template to open AND which AI prompts to run for your specific situation — so the structure-first workflow takes 30 minutes, not 3 hours. Get the Executive Slide System → £39

Is This Right For You?

✓ This is for you if:

  • You’ve used AI for presentations but the output feels flat, informational, or doesn’t get decisions
  • You want the structural logic that makes AI-generated content land with executive audiences
  • You want pre-written AI prompts matched to specific executive scenarios

✗ This is NOT for you if:

  • You don’t use AI for presentations and don’t plan to start
  • You’re looking for visual design templates (this is structural logic, not design)

⭐ 24 Years of Board-Level Decision Decks — Now a Structure AI Can’t Mess Up

Every template in the Executive Slide System was built from real executive approvals — board papers, SteerCo recommendations, ExCo investment cases. The decision architecture that got those approved is now the skeleton your AI fills.

Your AI-ready decision architecture:

  • Decision slide order that forces “what are you asking for?” onto slides 1–2 (not slide 19)
  • Options + trade-off slide formats executives actually use to decide — with costed consequences
  • Cost-of-inaction slide prompts — covering the slide missing from 90% of AI-generated decks
  • 51 matched AI prompts (Draft → Refine → Executive Polish) pre-written for every template

Get the Executive Slide System → £39

Built from board approvals, SteerCo recommendations, and ExCo investment cases at JPMorgan, RBS, PwC, and Commerzbank. Instant download. 30-day money-back guarantee.

Frequently Asked Questions

Can’t I just write better prompts instead of using templates?

Better prompts produce better content — but content isn’t the problem. The problem is structural logic: what goes on slide 1, what goes on slide 5, why the evidence is sequenced the way it is. No prompt, however sophisticated, gives AI the knowledge of your decision-maker, the political dynamics in the room, or the specific decision the meeting exists to make. Templates provide the structural skeleton that prompts can’t. Then prompts fill it brilliantly.

Does this work with ChatGPT, Copilot, and other AI tools?

Yes — because the structural problem is universal across all AI tools. ChatGPT, Copilot, Gamma, Claude, and every other AI presentation tool generates content in information mode. None of them generate in decision-first mode unless you provide the structure first. The templates work with any tool. The 51 AI prompts are written for ChatGPT-style interfaces but adapt to any conversational AI.

How long does the structure-first workflow take?

About 30 minutes for a complete executive deck. Five minutes to choose the right template for your scenario (the playbooks tell you which one). Five minutes to define the decision, recommendation, and key evidence points. Twenty minutes to run the three prompts per section and review the output. Compare that to 3-4 hours of prompt-iterate-restructure-prompt cycles when starting with AI alone.

What if my presentation is informational, not decision-based?

Most presentations that claim to be “informational” actually contain an implicit decision. A project status update implicitly asks “should we continue as planned?” A quarterly review implicitly asks “is this team performing?” If you genuinely need to inform without seeking a decision — a training session or a knowledge-share, for example — AI alone works fine. But for any presentation to leadership, there’s almost always a decision embedded. Find it, make it explicit, and build the structure around it.

📬 The Winning Edge — Weekly Newsletter

One executive presentation insight per week. AI workflows, structural frameworks, and the decision-first thinking that makes both work. No filler.

Subscribe Free →

Read next: AI handles slides. Q&A handles everything else. Read When You Don’t Know the Answer: 3 Responses That Save You in Q&A — the scripts for when AI can’t help.

Read next: If your next presentation involves giving sensitive feedback, read The Sandwich Feedback Trap: Why It Fails When You Critique Up (And the Mirror Structure That Works).

If your board pack goes out tomorrow morning — or your SteerCo pre-read is due by 5pm — don’t let AI decide the slide order. Build the structural skeleton first. Then let AI fill it. That’s how 22 slides of information become 12 slides that get a decision.

About the Author

Mary Beth Hazeldine is the Owner & Managing Director of Winning Presentations. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she has delivered high-stakes presentations in boardrooms across three continents.

A qualified clinical hypnotherapist and NLP practitioner, Mary Beth combines executive communication expertise with evidence-based techniques for managing presentation anxiety. She has trained thousands of executives and supported presentations for high-stakes funding rounds and approvals.

Read more articles at winningpresentations.com