Test what AI concludes before investors do.
Large language models (LLMs) are no longer an emerging technology in capital markets. They’re already being used by investors, analysts, advisers and internal teams to discover, summarise and compare corporate information at speed.
What’s changed is not that people have stopped reading and referencing annual reports. It’s that AI increasingly intermediates the first interaction.
As we’ve been discussing with clients through our work on agentic AI and corporate communications, these systems are now acting as a new type of stakeholder, querying, screening and framing organisations before a human analyst ever opens a PDF or browser page.
That shift has important implications for how corporate information behaves in practice.
How AI actually processes annual reports
A persistent misconception is that AI systems ‘read’ annual reports like we do.
In reality, modern LLM-based systems:
- search across accessible sources;
- retrieve small fragments of content;
- test consistency across those fragments; and
- synthesise an answer that prioritises coherence over completeness.
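The loop above can be sketched in a few lines. This is a deliberately simplified illustration of the retrieve-then-synthesise pattern, not our testing pipeline: the function names, the keyword matching and the "most-reinforced claim wins" heuristic are all assumptions made for the example.

```python
# Illustrative sketch of the retrieval-and-synthesis loop described above.
# Names, data and the scoring heuristic are hypothetical.

def retrieve(query_terms, sources):
    """Steps 1-2: search accessible sources and pull back small
    fragments (here, sentences) that mention any query term."""
    fragments = []
    for source, text in sources.items():
        for sentence in text.split(". "):
            if any(term in sentence.lower() for term in query_terms):
                fragments.append((source, sentence.strip()))
    return fragments

def consistency_groups(fragments):
    """Step 3: group fragments by the claim they support; fragments
    that agree reinforce one another, outliers look like contradiction."""
    groups = {}
    for source, sentence in fragments:
        key = sentence.lower()  # naive: identical wording = same claim
        groups.setdefault(key, []).append(source)
    return groups

def synthesise(groups):
    """Step 4: prefer the claim reinforced by the most sources --
    coherence over completeness."""
    claim, backing = max(groups.items(), key=lambda kv: len(kv[1]))
    return claim, backing

sources = {
    "annual_report": "Revenue grew 8% to £1.2bn. Net debt fell during the year.",
    "results_page":  "Revenue grew 8% to £1.2bn. Strategy refreshed in Q3.",
    "press_release": "Revenue grew 12% on a constant-currency basis.",
}
fragments = retrieve(["revenue"], sources)
claim, backing = synthesise(consistency_groups(fragments))
print(claim, backing)
```

Note what the toy example surfaces: the press release's constant-currency figure is dropped as a lone outlier, even though it is legitimate nuance. That is the "contradiction, not nuance" failure mode described below.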
Annual reports remain the most authoritative source, but they are no longer the only framing layer.
This distinction matters.
Because when AI systems are constrained to the annual report itself, most UK-listed reports perform strongly. They’re well structured, disciplined and internally consistent. Our testing consistently shows that issues tend to be mechanical rather than conceptual: dense tables, layout artefacts or adjacent-year confusion.
That is a reassuring finding.
It also reframes the challenge.
The real risk sits outside the annual report
Where reliability starts to break down is not usually inside the report, but across the wider corporate communications ecosystem.
AI systems don’t access documents because they are important to you. They access what is:
- easiest to discover;
- simplest to parse; and
- most consistently reinforced across channels.
As a result:
- summary pages can outweigh reports in discovery;
- supporting content can subtly reframe strategy or risk; and
- metrics can appear without the context that gives them meaning.
From an AI’s perspective, this doesn’t look like nuance. It looks like contradiction.
No amount of optimisation to a single PDF can fully mitigate that.
This is an optimisation problem, not a reinvention exercise
One of the most important messages for teams is this:
You do not need to redesign your annual report for AI.
UK annual reports are already strong inputs for LLMs. The opportunity lies in optimisation:
- clearer structure and labelling;
- consistent definitions across channels;
- proximity of metrics and explanations; and
- deliberate handling of silence on sensitive topics.
These changes benefit human readers just as much as machines, and they reduce the burden on any one document.
A critical development: we’re testing reports against AI while content is still in production
One of the most significant capabilities we’ve developed, and one that clients have particularly welcomed, is the ability to run AI tests while reporting and communications are still in production.
Because our approach is grounded in current, production-grade LLMs, we can test:
- draft annual reports;
- pre-publication results materials; and
- supporting digital or narrative content
before anything is made public.
This turns AI from a post-publication risk into a pre-publication quality check.
In practice, this allows teams to:
- identify where context may be lost when content is summarised by AI;
- spot inconsistencies across draft channels early; and
- make targeted, low-effort adjustments before sign-off.
Clients have described this as a natural extension of existing review and assurance processes: not a new burden, but an additional lens that reflects how information is now consumed by the modern stakeholder.
AI isn’t replacing judgement. But it is compressing time and shaping first impressions.
Why agentic stakeholders raise the stakes
In an environment of agentic stakeholders:
- inconsistencies surface faster;
- misalignment propagates more widely; and
- first-pass interpretations increasingly influence what humans focus on next.
This isn’t about compliance failure. It’s about loss of narrative control at the point of discovery.
Organisations that engage with this early aren’t reacting to problems – they’re staying ahead of them.
Our approach: testing reality, not theorising risk
This is why we have built systems to test both:
- how an annual report performs when interrogated directly by modern AI; and
- how the wider corporate communications ecosystem behaves when AI goes looking for answers.
For most organisations, the outcome is reassuring.
For some, it highlights specific, low-effort optimisations that materially reduce risk around accuracy, hallucination and reasoning.
We’re ready to educate and test your communications
We’re currently offering education and testing on a complimentary basis for a limited number of organisations.
We’re already working with teams to share our insights, learning and methodology, and to show how their communications performed under our AI testing.
We’re doing this for three reasons:
- Confidence. Our experience shows that most annual reports and the wider communications ecosystem already stand up well. The value lies in understanding where, if anywhere, optimisation is genuinely needed.
- Quality control at the right moment. Running these tests while content is still in production allows teams to address issues or minimise risks before publication, not explain them afterwards.
- Shared learning. Agentic technologies are evolving rapidly. By working closely with organisations today, we continue to refine our understanding of how these systems behave, and we feed those insights directly back into client advice.
At the same time, we’re already educating and advising our existing clients on how to optimise both their reporting and their wider communications ecosystem in light of these changes.
This is not about futureproofing for a hypothetical risk. It’s about understanding and shaping how your organisation is already being interpreted today.
Test what AI concludes before investors do.
We’re offering complimentary AI testing for a limited number of organisations. See how your reporting performs under interrogation by modern LLM systems, and identify where small refinements could materially reduce interpretative risk.