
08 May 2026

“Did You Use AI for This?” — How to Answer When a Board Member Asks

Quick answer: When a board member asks if you used AI to build the deck, the answer is yes (if you did). What ruins careers is the hesitation, not the truth. Use the three-part response: confirm tool use plainly, name the part you owned, name the verification you applied. The whole reply takes under thirty seconds. Done well, the question dissolves and the room moves on. Done badly — with hedging, irritation, or evasion — the question becomes the meeting.

Kenji was eight minutes into a quarterly results presentation when the non-executive director on his right tilted her head and said, gently but clearly, “Just a quick one — did you use AI for any of this?” The room went quiet in the way rooms do when an unscripted question lands. Kenji’s first instinct was to say “no, of course not” — even though he had used Copilot to draft the structure and roughly half the headlines. The lie would have been easy. It also would have been a career-shaping mistake.

He took a beat. He said: “Yes — I used Copilot to draft the structure, and I rewrote the analysis and the recommendation myself. The numbers in slide six and slide nine I personally verified against the source data.” Total response time: seventeen seconds. The non-executive director nodded once, said “thanks”, and the room moved on. By the end of the meeting nobody mentioned the AI question again, and Kenji’s recommendation was approved.

What saved Kenji was not the truthfulness alone, although the truthfulness mattered. It was the structure of the answer. The three-part response — confirm, own, verify — handles the question cleanly because it gives the room everything it needs to assess your credibility in one short reply. Most presenters who fumble this question do so because they have not pre-built the structure. They are composing under pressure, and what comes out is hedging, defensiveness, or over-explanation. All three escalate the question instead of resolving it.

Looking for a structured way to handle tough questions in executive Q&A?

The Executive Q&A Handling System is designed for senior professionals who need to handle tough questions with calm authority and give decision-safe answers under board-level pressure.

Explore the Q&A System →

Why the question gets asked

“Did you use AI for this?” is rarely the literal question. It is a proxy for one of three underlying concerns the board member has not stated explicitly. Understanding which concern is in play tells you what your response actually needs to address.

The first underlying concern is verification. The board member has spotted a phrasing, a claim, or a piece of language that does not feel like it came from someone who knows the business. They are checking whether what they are looking at has been verified by a human who understands the context. The right response anchors the verification work — the parts you personally checked against source data, the editorial decisions you made on top of any AI draft.

The second underlying concern is governance. Some board members are tracking AI use as a corporate risk topic — data privacy, intellectual property, model bias, regulatory exposure. The question is partly about you and partly about the organisation’s broader AI posture. The right response acknowledges the tool use without minimising it and signals that the work was done within whatever AI guidelines are in place.

The third underlying concern is competence. The board member wants to know whether you, the presenter, can answer questions beyond what is on the slides — or whether the AI has produced material you could not defend if pressed. The right response demonstrates ownership of the analysis and recommendation: not “the AI thinks”, but “I think”. The competence concern is the most common driver of the question and the one that most rewards a confident, structured reply.

[Infographic: the three underlying concerns behind the AI-use question — verification, governance, and competence — with the response element each concern requires]

The three-part response structure

The structure has three parts, in this order. Reordering or skipping any of them weakens the response. Each part is a short sentence. The whole reply takes between fifteen and thirty seconds.

Part one: confirm tool use plainly. “Yes — I used Copilot to draft the structure.” Or: “Yes — I used ChatGPT to summarise the source documents.” Or: “No, this was written by hand.” The plain confirmation does two things. It removes any sense that you are hesitating to admit something. And it answers the literal question, which clears the way for the parts that actually address the underlying concern.

The most common error here is qualifying the confirmation with a defensive softener. “Yes, but only for the structure.” “Yes, but I also rewrote everything.” “Yes, although obviously the analysis is mine.” The “but” and “although” signal that you think the AI use is something to apologise for, which contradicts the calm authority the room is reading you for. Confirm cleanly. The qualifying work belongs in part two, not part one.

Part two: name the part you owned. “The analysis and recommendation are mine.” Or: “The conclusion in slide twelve is my judgement; the model surfaced the framing question.” Or: “The structural sequence reflects my view of how the committee thinks; I used the AI to draft the headlines and then rewrote the ones that did not land.”

This part is where the competence concern gets resolved. You are explicitly naming what you contributed, in a sentence that demonstrates you can articulate the boundary between AI output and human judgement. Board members trust presenters who can name their contribution precisely. They distrust presenters who claim everything as their own (which is implausible after admitting AI use) or who minimise their own contribution (which suggests they did not really do the work).

Part three: name the verification you applied. “The numbers in slide six and slide nine I personally verified against the source data.” Or: “I cross-checked the regulatory citation in slide eight with our compliance team.” Or: “The competitive comparison was reviewed by our strategy lead before this meeting.”

This part addresses both the verification concern and the governance concern in one move. It signals that you did not simply pass through the AI output — you treated it as a draft that required senior verification. Specific verification details are more credible than general assurances. “I checked the numbers” is weaker than “the numbers in slide six and slide nine I verified against the source data”. Specificity buys credibility.

Five failure modes that escalate the question

The same question lands very differently depending on how it is handled. Five specific failure modes consistently escalate “did you use AI” from a passing query into a meeting-derailing exchange.

The hedge. “Well, I used some AI to help with parts of it…” This signals discomfort and invites follow-up. The board reads the hedge as evasion, not honesty. The fix is the plain confirmation in part one of the structure.

The denial. “No, I wrote the whole thing myself.” If this is true, say it. If this is false, do not say it. The risk-reward maths is stark: the upside of a successful denial is small; the downside of a denial that gets exposed (a chief of staff who knows you used Copilot, an artefact in the file metadata, a bullet that obviously came from a model) is career-defining. Never lie about AI use. The question is not worth the risk.

The over-explanation. “Yes, I used Copilot, but you have to understand that the way I use it is more like a research assistant than a writer, and obviously the conclusions are mine because the model couldn’t possibly know our specific situation, and I always verify everything…” Over-explanation reads as guilt. The board reads the length of your reply as a measure of your discomfort. Keep the answer to thirty seconds maximum. Anything longer triggers the suspicion the short answer would have prevented.

[Infographic: the five failure modes when answering “did you use AI” — the hedge, the denial, the over-explanation, the irritation, and the technical lecture — with the corrected response for each]

The complete framework for executive Q&A under pressure

The Executive Q&A Handling System is the structured framework for senior professionals presenting to boards and executive committees. Tough questions, calm authority, decision-safe answers in 45 seconds. £39, instant access.

  • Structured response patterns for the most common executive question types
  • Recovery techniques for when a question lands harder than expected
  • Frameworks for hostile questions, multi-part questions, and trap questions
  • Designed for board, investment committee, and executive committee scenarios

Get the Executive Q&A Handling System →

Designed for senior professionals managing high-stakes Q&A in executive presentation contexts.

The irritation. “Does it really matter how I built the slides?” Or: “I’m not sure why that’s relevant.” Both responses cast the question as inappropriate, which puts the questioner on the defensive and turns the exchange into a status confrontation. Even when you privately think the question is petty, do not signal that thought. Treat the question as legitimate, answer it cleanly, move on.

The technical lecture. “Well, the way Copilot Agent Mode works is that it chains multiple sub-tasks, and I gave it instructions to…” Board members did not ask for a tutorial on AI capabilities. They asked whether you used the tool. Stay at the level the question was asked. If they want technical detail, they will follow up.

Likely follow-up questions and how to handle them

If the three-part response is delivered well, follow-up questions are uncommon. When they do come, they tend to fall into a small number of patterns. Knowing the patterns lets you respond without composing under pressure.

“How do you know the AI didn’t make something up?” Address the verification process specifically. “Every quantitative claim in the deck I verified against the source documents — the model has a tendency to restate numbers in ways that are close but not exact, so I treat every figure as a flag for verification. The claims in slides four, six, and twelve I cross-checked with [name of the source / colleague / function].”

“Are we comfortable with this from a data privacy perspective?” This is a governance question and it deserves a governance answer. “I used the enterprise version of Copilot, which keeps data within our tenancy and does not train external models on our inputs. This complies with our current AI use guidelines.” If you do not know the answer to this question definitively, do not improvise. Say: “I followed the AI guidelines our IT team published in [month]. If you want a more detailed assessment, [name of CIO / DPO / equivalent] can give you the full picture.”

“Could you have produced this without AI?” Almost always yes, and you should say so. “Yes — it would have taken me about three additional hours of structuring and drafting time, which is the time AI saved on this deck. The analysis itself was the same work either way.” This handles the implicit doubt about competence by making clear that AI affected your speed, not your capability.

“What else have you used AI for?” Be honest, be brief, and be specific. “For executive presentation work, I use Copilot for first-draft structure, source-document compression, and Q&A pre-mortems. For [other categories of work], I follow the same pattern of AI draft plus human verification.” Avoid sweeping statements like “I use it for everything” or “almost nothing” — both invite follow-up. Naming specific workflows is more credible than describing your AI use in general terms.

The prevention move: pre-empting the question entirely

The cleanest handling of the AI question is the version where the question never gets asked, because the deck does not telegraph AI use. The board member who asked Kenji’s question did so because something in the deck — a slightly generic phrasing, a too-symmetrical structure — pinged her ear. If the editorial pass on the AI draft had removed those signals, the question might not have surfaced.

The prevention move is the editorial pass itself. Rewrite generic headlines as findings. Anchor every claim to specific evidence the audience recognises as internal. Replace AI-flavoured phrasing with your organisation’s actual vocabulary. Cut the slides the AI added because they “completed” a section. The same editorial moves that produce a deck that gets approved also produce a deck that does not invite the AI-use question. The editorial pass is the prevention.

None of this means concealment. If you are asked, you answer truthfully using the three-part structure. But the editorial pass means the question gets asked less often, because the deck reads as senior thinking from inside the business — which is what board members are looking for in the first place. The AI underneath becomes irrelevant. The deck is yours either way.

FAQ

What if I used AI but I genuinely cannot remember what was AI-drafted versus what I wrote?

This happens, particularly when the editorial pass has been thorough. The honest answer is “I used Copilot for the first draft and then heavily edited the result; the final version reflects my analysis, but I would not be able to point to a specific bullet and tell you whether the original wording came from the model or from me.” That answer is credible because it acknowledges the merged nature of the work without trying to claim authorship of every word. Most board members will accept it without follow-up.

Should I disclose AI use proactively even if not asked?

Usually no, unless your organisation has an explicit disclosure requirement or unless the deck includes a specific element (a quoted figure, a regulatory citation) that you want to flag for additional verification. Proactive disclosure tends to draw attention to AI use rather than normalise it, and it can read as defensive. The exception is environments where disclosure is genuinely expected — academic settings, some regulated industries, and any organisation with a stated AI-use disclosure policy.

What if a board member follows up with “I do not approve of AI use for board material”?

This is a values disagreement, not a competence question. Acknowledge the position without abandoning the work: “I understand. The decision in slide twelve is mine and I would land on the same recommendation regardless of how the deck was drafted. I am open to discussing the organisation’s broader AI use policy in a separate forum.” That response respects the disagreement, retains your ownership of the substance, and moves the discussion of AI policy off the meeting agenda.

Can a deck reveal AI use in ways I might not have noticed?

Yes — file metadata can sometimes show which application generated which content, and certain phrasings are recognisable as AI-typical to readers familiar with the patterns. The editorial pass is the safest way to remove the most common signals, but assume that any deck you send to a board could be analysed for AI use if a board member chose to. The honest-when-asked approach removes the risk of being caught in a denial and keeps your credibility intact regardless of what the metadata or phrasing might reveal.

The Winning Edge — Thursday newsletter

Every Thursday, The Winning Edge delivers one structural insight for executives presenting to boards, investment committees, and senior stakeholders. No general tips. No motivational framing. One specific technique, one executive scenario, one action. Subscribe to The Winning Edge →

Next step: write down your three-part response now, before the question is ever asked. Confirm sentence. Ownership sentence. Verification sentence. Read it aloud. Adjust until it sounds like you. The pre-built response is what holds when the live moment arrives.

Related reading: Why AI-generated slides look generic — and the editorial pass that prevents the AI-use question.

About the author. Mary Beth Hazeldine is Owner & Managing Director of Winning Presentations Ltd, founded in 1990. With 24 years of corporate banking experience at JPMorgan Chase, PwC, Royal Bank of Scotland, and Commerzbank, she advises executives across financial services, healthcare, technology, and government on structuring presentations for high-stakes funding rounds, approvals, and board-level decisions.