The Last Horse in Investments
Most of the investment industry is using AI to build faster horses, not to reinvent the customer relationship. A talk at the TISA AI Conference on why the next two years will separate the reinventors from the rest.
Earlier this month Yellow Radio’s John Kinson presented at the TISA AI Conference in London, sponsored by Altus Consulting (part of Accenture). The talk, “The last horse in investments: AI, trust, and the reinvention imperative”, explored a question that the investment industry has been circling for over a year now: what should we actually be building with AI?
The answer, for the vast majority of firms, turns out to be the wrong thing.
Horses everywhere
There is a famous photograph of 5th Avenue in New York City, taken at Easter 1900. The street is packed with horses and carriages. There is one automobile. Thirteen years later, a photograph of the same street shows the opposite: cars everywhere, and one horse.
The quote commonly attributed to Henry Ford, “If I had asked people what they wanted, they would have said faster horses”, is probably apocryphal. But the insight is real. The Model T was not a faster horse. It was a reinvention of transportation itself.
This framing is useful when thinking about how the investment industry is responding to AI. There are essentially three strategies being pursued:
The cheaper horse. Cut costs. Automate existing processes, reduce headcount, deliver the same capability for less money. This is bottom-line OpEx reduction. About 10% of firms are here: automating KYC, shrinking call centres, streamlining back-office operations.
The faster horse. Increase capacity. Serve more customers, generate more revenue, deliver the same capability at greater volume. This is top-line revenue growth. About 81% of firms are here: faster reports, more personalised communications, AI-assisted everything.
The Model T. Reinvent entirely. New capabilities, new customer relationships, new business models. Fundamental transformation. About 9% of firms are here: reimagining what investment services actually means, building financial wellbeing rather than financial products.
The 9% are creating a performance frontier that the other 91% will struggle to reach. The cheaper horse buys you three to five years of survival. The faster horse lets you compete for perhaps seven. Only the Model T defines the future.
The advice gap is the opportunity
As we explored at the TIN AI conference last October, the investment industry’s instinct is defensive: how do we protect ourselves from AI? But the numbers tell a different story.
Only 9% of UK adults have ever received regulated financial advice. 70% of people with more than £5,000 in savings have never considered investing. By 2035, the UK is projected to have 27 million scattered pension pots, up from 8 million today.
AI could solve this. It has not. Not because the technology does not exist, but because the industry is building the wrong thing. Most firms are using AI to make their existing processes faster or cheaper. Very few are asking whether those processes are the right ones in the first place.
We are still in the engineering era
One of the central arguments of the talk was about timing. Every major technology platform has followed a roughly ten-year pattern: an engineering phase where the foundational technology matures, followed by a design phase where the real commercial and social value gets created.
The World Wide Web went through this from roughly 1994 to 2004. HTTP, web servers, browsers, and frameworks had to be built, standardised, and made reliable before Web 2.0, social media, and the platforms we now take for granted could emerge. Mobile went through the same cycle, from the late 1990s through to the late 2000s and the iPhone App Store.
AI is at the tail end of its engineering phase. The transformer architecture dates to 2017. ChatGPT’s popularity has led many people to believe we are already in the design phase, the era of mass commercial exploitation. We are not. New models are released monthly. Frameworks like LangChain, CrewAI, and AutoGen are evolving rapidly. New protocols such as MCP and A2A are still in early adoption.
The strategic implication is significant. What you build today may be obsolete in 18 months. Architecture matters more than implementation. Bet on abstractions, not specific vendors. And open-weight models, as we argued in The Battle for Sovereign AI, give you optionality that proprietary lock-in does not.
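What "bet on abstractions, not specific vendors" can mean in practice is sketched below: the application codes against a thin interface, and each vendor or open-weight deployment sits behind a swappable adapter. The class names are hypothetical and the vendor calls are stubbed; no specific SDK is assumed.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """The application depends on this interface, never on a vendor SDK."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real adapter would call vendor A's API here; stubbed for the sketch.
        return f"[vendor-a] {prompt}"

class OpenWeightAdapter(ChatModel):
    def complete(self, prompt: str) -> str:
        # A real adapter would call a self-hosted open-weight model; stubbed.
        return f"[open-weight] {prompt}"

def answer(model: ChatModel, question: str) -> str:
    # Swapping vendors is a one-line change at the call site,
    # not a rewrite of everything that touches the model.
    return model.complete(question)
```

If the model landscape shifts in 18 months, only the adapter changes; the rest of the codebase is insulated.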
Demo easy, production hard
There is a seductive quality to AI demos. "Look, it works! Built in a weekend." And it probably did work: in a controlled environment, with clean data and no edge cases.
Production is another matter entirely. Beneath the demo surface sits integration with real systems, governance and audit trails, trust and safety controls to prevent misuse, observability so you know what the system is actually doing, guardrails to keep it within acceptable boundaries, and drift monitoring to catch when models degrade over time.
The economics are also changing in unexpected ways. Capital expenditure for software development is falling, with AI coding assistants accelerating delivery by 30-50% and proofs of concept being built in days rather than months. But operating expenditure is becoming highly variable and hard to predict. Token costs depend on usage patterns, agentic loops can spiral unexpectedly, and API pricing spans a 60x range from $0.25 to $15 per million tokens. When finance asks “what will this cost next year?”, the honest answer is: we do not know yet.
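The 60x spread becomes concrete with a back-of-the-envelope sketch. The request volumes and token counts below are illustrative assumptions; only the per-million-token prices come from the range quoted above.

```python
# Back-of-the-envelope monthly OpEx for an AI feature.
# Volumes and token counts are illustrative assumptions, not benchmarks.

MILLION = 1_000_000

def monthly_token_cost(requests_per_day, tokens_per_request, price_per_million):
    """Estimate monthly spend for one workload at one price point."""
    tokens_per_month = requests_per_day * tokens_per_request * 30
    return tokens_per_month / MILLION * price_per_million

# The same workload priced at both ends of the $0.25-$15 per million
# token range: a 60x spread before agentic loops enter the picture.
low = monthly_token_cost(10_000, 2_000, 0.25)
high = monthly_token_cost(10_000, 2_000, 15.00)
print(f"${low:,.0f} to ${high:,.0f} per month")  # $150 to $9,000
```

The arithmetic is trivial; the point is that model choice alone moves the answer to "what will this cost next year?" by nearly two orders of magnitude, before usage growth or agentic loops are even considered.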
And then there is the fundamental shift from deterministic to probabilistic software. Traditional software gives the same output for the same input. You test it once and trust it. Time, cost, and quality are predictable. AI-powered software gives different outputs for the same input. You test constantly and trust cautiously. The QA question is no longer “did it pass the test?” but “is it good enough, often enough?”
For financial services, where accuracy is not optional, this is a genuinely hard problem. What level of accuracy is acceptable for financial advice? How do you test something that produces different results each time? Who is liable when the AI gets it wrong?
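One way to operationalise "is it good enough, often enough?" is statistical acceptance testing: run the same input many times and judge the pass rate rather than any single run. The harness below is a minimal sketch; the `evaluate` function, the threshold, and the stubbed model are all illustrative assumptions.

```python
import random

def evaluate(model_fn, test_case, n_runs=100, pass_threshold=0.95):
    """Acceptance test for probabilistic software: the same input is run
    many times, and the pass *rate* is judged, not any single run."""
    passes = sum(1 for _ in range(n_runs) if test_case(model_fn()))
    rate = passes / n_runs
    return rate >= pass_threshold, rate

# Stand-in for a model call: a stub that gives a bad answer ~2% of the time.
def stub_model():
    return "ok" if random.random() > 0.02 else "hallucination"

accepted, rate = evaluate(stub_model, lambda out: out == "ok")
print(f"pass rate {rate:.0%}, accepted: {accepted}")
```

Note that the hard part is not the harness but the threshold: deciding what pass rate is acceptable for regulated financial guidance is a business and compliance question, not an engineering one.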
The legacy reality
These challenges are compounded for life, pension, and investment firms by the legacy estates they are working with. Mainframes, COBOL, decades of accumulated technical debt. Data quality that varies wildly: inconsistent, incomplete, and siloed across systems. Integration nightmares with APIs that do not exist and batch processes from 1995. And modernisation paralysis, where every innovation project competes with the need to keep the lights on.
AI makes all of this harder, not easier. AI is hungry for clean, connected, contextual data. Users now expect real-time personalisation, not batch processing. Trust requires traceability and explainability, which requires data lineage. The unpleasant reality is that many firms may need to fix their data before AI can help them at all.
A pragmatic way forward
Despite all of this, there is a pragmatic path. On the legacy side: start at the edge, not the core, building new capabilities alongside existing systems rather than replacing them. Use AI to improve and enrich data, not just consume it. Build APIs incrementally, prioritised by AI value rather than system age. And ring-fence innovation with a separate team, separate budget, and a clear mandate.
On the AI side: keep the scope narrow and the data manageable. One use case, one data domain, prove value. Keep humans in the loop, which reduces risk, buys time, and builds trust. Start simple on explainability and add rigour as you scale. And run parallel tracks, fixing foundations and experimenting with AI simultaneously.
You do not need to solve everything before you start. Start narrow, learn fast, expand carefully.
JPMorgan's Coach AI is a good example of what works. During the April 2025 market volatility, it delivered 95% faster research retrieval and a 20% year-on-year increase in sales. The patterns that made it successful were domain specialisation (narrow focus beats general-purpose), human checkpoints (sequential workflows, not autonomous ones), and clear handoff protocols (AI augments, humans decide). The realistic assessment is that agentic AI currently works well for tasks that would take a human 30-40 minutes, and only when paired with process reinvention. Complex multi-agent systems still have a 60-90% failure rate.
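The checkpoint-and-handoff pattern described above can be sketched as a sequential pipeline in which nothing reaches a customer without an explicit human decision. The step names and the approval interface here are hypothetical, not a description of any firm's actual system.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    content: str
    approved: bool = False

def ai_draft(request: str) -> Draft:
    """Narrow, domain-specialised step: the AI produces a draft, nothing more."""
    return Draft(content=f"Draft response to: {request}")

def human_checkpoint(draft: Draft, approve_fn) -> Draft:
    """Explicit handoff: a human decision gates every draft."""
    draft.approved = approve_fn(draft.content)
    return draft

def send(draft: Draft) -> str:
    # The hard boundary: unapproved output can never reach a customer.
    if not draft.approved:
        raise PermissionError("Unapproved drafts never reach a customer")
    return f"SENT: {draft.content}"

# Sequential workflow, not an autonomous loop: draft -> human review -> action.
draft = ai_draft("summarise fund performance for client X")
draft = human_checkpoint(draft, approve_fn=lambda text: "fund" in text)
print(send(draft))
```

The design choice worth noting is that the human checkpoint is structural, enforced by the `send` step, rather than a policy that an autonomous loop could route around.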
The six-month window
The regulatory timeline is concentrating minds. The FCA’s Targeted Support framework takes effect in April 2026, creating a pathway for personalised guidance at key moments without the full advice burden. This is the framework that makes AI-assisted financial guidance commercially viable, but only for firms that have built the capabilities to deliver it.
The EU AI Act’s high-risk provisions follow in August 2026. Consumer Duty requirements continue to raise the bar daily. And 96% of industry participants believe AI can revolutionise client servicing, but only 41% are scaling it as core business. That gap is where competitors will eat you.
There is also a certification paradox that the industry has barely begun to address. Human advisors require certification. They pass exams. They are supervised. They are accountable. How do you certify an AI agent doing regulated work? Who is liable when it gets things wrong: the firm, the model provider, or the deployer? How do you supervise an AI when traditional training and competence frameworks do not fit? What does “suitability” mean when recommendations are probabilistic? How do you evidence competence when the AI changes monthly?
The two-year horizon
Reinvention, the Model T response, means more than doing existing things better. It means new modalities entirely. Today’s investment services are delivered through web portals, mobile apps, text interfaces, and screens. The reinvention is voice (natural conversation, in any language), video (show me, don’t tell me), ambient (glasses, watches, home devices), and proactive (AI that reaches out at the right moment, not AI that waits to be asked).
The 70% of people who have never considered investing might engage with a voice assistant that checks in after payday. The 27 million scattered pension pots might consolidate through a smart glasses notification. The interface is not the web portal. The interface is the moment.
A closing thought
During the Q&A, a question was raised that the conference had not expected. If AI can genuinely democratise financial services, and if the cost of delivering personalised financial guidance drops towards zero, what does that do to the value of money itself? Not the technical value, not the monetary policy question, but the human experience of managing, growing, and worrying about money.
The investment industry exists, in large part, because money is complicated and people need help with it. If AI makes that help abundant and essentially free, the industry will need to articulate a very different value proposition. Not “we help you manage your money” but “we help you understand what your money is for.” That is a fundamentally harder question, and one that no amount of technology can answer on its own.
The feedback from the conference was encouraging. Several attendees commented that it was the most thought-provoking session of the day, and that the honest acknowledgement of what is genuinely hard, alongside what is genuinely possible, was refreshing. In an industry that oscillates between AI hype and AI anxiety, a pragmatic middle ground seems to be what people are looking for.
The last horse on 5th Avenue did not disappear because horses were bad. It disappeared because the car was better at the job that needed doing. The question for the investment industry is not whether AI will transform financial services. It is whether your firm will be the car, or the horse.