Leaner Teams, Harder Truths
What Quant Finance Told Me at Future Alpha

Two of our four panelists were pulled away by urgent business before the session began — the geopolitics of April 2026 have a way of doing that. What followed, with Nan Xiao of Greenland Capital and Milind Sharma of QuantZ Capital, was a more unfiltered conversation than I expected. Twenty minutes on the Alpha X Stage. No slides. No prepared remarks. Just two practitioners and six questions the quant finance industry is not yet answering honestly.
Q1 — How are AI and automation changing the makeup of quant research teams?
Milind was direct: the traditional pipeline of PhD, junior researcher, signal production is compressing. His “Hedge Fund in a Box” thesis is not theoretical; it is the architecture he is actively building. Nan offered the counterweight from Greenland Capital: AI deployed across fifteen-plus teams, producing senior-analyst-level output, but working alongside existing researchers rather than replacing them. Her cost reduction was not headcount elimination; it was amplification. Two legitimate models, running simultaneously inside the same industry.
The question is not which model wins. It is whether your governance infrastructure can support the one you are choosing.
Q2 — Which skill sets are becoming redundant — and which are now essential?
Milind put the job market reality plainly: junior roles in data cleaning, routine backtesting, and template-based signal generation are already being absorbed. The compression is not coming — it is here. Nan’s answer cut in a different direction entirely: the most irreplaceable skill on her team is data quality judgment. AI at scale does not solve data problems. It amplifies them silently. The humans who understand data provenance, failure modes, and what the model cannot see remain load-bearing.
The most dangerous assumption in quant finance right now is that smarter models fix bad data. They do not. They hide it faster.
Q3 — How do you balance team reduction with innovation, oversight, and explainability?
Nan was precise: explainability is not a compliance checkbox you retrofit at review time — it is a design constraint you build from the start, at the data governance layer. Milind framed it as a sequencing problem: firms that cut headcount before building the oversight layer discover their blind spots during a live drawdown, not before.
Team reduction is a cost decision. Oversight failure is an existential one. Conflating them is how firms get into serious trouble.
Q4 — How do you interpret the “tsunami of dis-intermediation” — where LLMs and agents now sit between data, models, and decision-makers?
Milind’s thirty years on trading desks gave him the historical frame: dis-intermediation is not new to finance; what is new is the speed, and the layer at which it is happening. LLMs sitting between raw data and Portfolio Management (PM) level decisions compress what used to require three or four human handoffs into milliseconds. Nan’s concern was more specific: when the intermediary is an agent, accountability becomes distributed. The firm knows the output. Fewer and fewer people understand the chain that produced it.
Human intermediaries were implicit drift detectors. Remove them and you lose the error-correction mechanism: the PM who notices when signals feel off. Agentic systems do not notice. They compound.
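That error-correction mechanism can be approximated mechanically: a PM noticing that signals feel off is, at bottom, a distribution-shift check. Below is a minimal sketch of such a check on a daily signal series; the window sizes and p-value threshold are illustrative choices of mine, not recommendations from the panel.

```python
# Illustrative drift check: compare a recent window of signal values
# against a trailing reference window, the way a PM implicitly does
# when "signals feel off". Window sizes and threshold are assumptions.
import numpy as np
from scipy.stats import ks_2samp

def signal_drift_alert(signal: np.ndarray,
                       ref_window: int = 250,
                       recent_window: int = 20,
                       p_threshold: float = 0.01) -> bool:
    """Return True if the recent signal distribution has drifted
    away from the trailing reference distribution."""
    if len(signal) < ref_window + recent_window:
        return False  # not enough history to judge
    reference = signal[-(ref_window + recent_window):-recent_window]
    recent = signal[-recent_window:]
    # Two-sample Kolmogorov-Smirnov test: a low p-value means the two
    # samples are unlikely to come from the same distribution.
    _stat, p_value = ks_2samp(reference, recent)
    return p_value < p_threshold

# Toy usage with synthetic data: a level shift in the final month.
rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0.0, 1.0, 250),
                         rng.normal(1.5, 1.0, 20)])
print(signal_drift_alert(series))  # True: the shift is flagged
```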
Q5 — What is the actual ROI of replacing headcount with intelligence tools?
I took this question personally: it is the one I was asked to lead, and the one where I had live client data to bring rather than industry averages. Most firms aren’t calculating Return on Investment. They’re relying on Rhetoric on Investment. Studies from McKinsey to MIT confirm that the majority of organizations with AI projects cannot even quantify their returns, and those that try make a naïve assumption: that net benefits over net costs operate in a linear world. They don’t. Revenue and cost data are overwhelmingly non-linear, and ROI in isolation is meaningless without a risk calculation to compare it against. A 20% efficiency gain that carries hidden drawdown risk is not the same as one that doesn’t.
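To make that last sentence concrete, here is a toy Monte Carlo comparison in which every number is hypothetical: two tools post the same 20% headline gain, but one carries a small probability of a large loss, and a tail metric (CVaR) separates them where the headline cannot.

```python
# Toy illustration (all figures hypothetical): two tools with the same
# headline 20% gain, one carrying a small chance of a large drawdown.
import random

random.seed(42)
N = 100_000  # Monte Carlo trials

def simulate(gain: float, failure_prob: float, failure_cost: float) -> list:
    """Net benefit per trial: the gain, minus a rare large loss."""
    return [gain - (failure_cost if random.random() < failure_prob else 0.0)
            for _ in range(N)]

tool_a = simulate(gain=0.20, failure_prob=0.00, failure_cost=0.0)
tool_b = simulate(gain=0.20, failure_prob=0.02, failure_cost=5.0)

def mean(xs):
    return sum(xs) / len(xs)

def cvar(xs, alpha=0.95):
    """Average of the worst (1 - alpha) share of outcomes."""
    tail = sorted(xs)[: max(1, int(len(xs) * (1 - alpha)))]
    return mean(tail)

for name, xs in [("A (no tail risk)", tool_a), ("B (hidden tail risk)", tool_b)]:
    print(f"{name}: mean ROI {mean(xs):+.3f}, CVaR(95%) {cvar(xs):+.3f}")
# B's mean drops to roughly +0.10 and its CVaR is deeply negative:
# the drawdown a linear spreadsheet comparison never sees.
```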
This is fundamentally an operations research problem, and data science has the tools to solve it properly. Bayesian optimization, causal inference frameworks, and reinforcement learning-based resource allocation can all frame workforce and capital deployment as genuine optimization problems rather than spreadsheet approximations. Only 10 to 15% of companies running AI projects achieve meaningful, defensible ROI. The difference between those firms and the rest is not which tools they use. It is whether they treated the question rigorously from the start.
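As one sketch of what treating it as an optimization problem can look like in code, the example below substitutes a plain Monte Carlo grid search for full Bayesian optimization, over a single decision variable: the fraction of a research budget shifted from analysts to AI tooling. Every functional form and parameter is an assumption chosen for illustration.

```python
# Hypothetical sketch: choose what fraction of a research budget to
# shift from analysts to AI tooling, maximizing a risk-penalized
# expected net benefit instead of a naive linear ROI ratio.
import numpy as np

rng = np.random.default_rng(7)

def expected_utility(x: float, risk_aversion: float = 2.0,
                     trials: int = 50_000) -> float:
    """x = fraction of budget shifted to tooling (0..1).
    Benefit is concave (diminishing returns); tail risk grows with x
    as oversight thins out. All functional forms are assumptions."""
    benefit = 0.30 * np.sqrt(x)                       # diminishing returns
    failure = rng.random(trials) < 0.05 * x**2        # oversight-thinning risk
    outcomes = benefit - np.where(failure, 1.0, 0.0)  # rare large loss
    return outcomes.mean() - risk_aversion * outcomes.std()

grid = np.linspace(0.0, 1.0, 21)
best = max(grid, key=expected_utility)
print(f"risk-adjusted optimum: shift {best:.0%} of budget")
# A linear ROI spreadsheet would push x toward 100%; the risk-penalized
# objective lands at an interior optimum well below that.
```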
Every ROI figure from AI in finance measures what was saved. Almost none measure what was lost in the failure modes nobody reported.
Q6 — How do you validate the decisions made by AI agents operating autonomously inside a fund?
This question generated the most energy — both on stage and in the conversations that continued for thirty minutes after the session at the edge of the platform. Nan carried it with clarity: validation of autonomous agents requires knowing data provenance, decision logic, and failure conditions before deployment, not after. Standard model risk management frameworks were designed for deterministic models. Probabilistic agents violate that assumption at every level. The absence of a dedicated model validator on the panel was felt most here.
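One way to give “before deployment, not after” operational teeth is a hard promotion gate that refuses to deploy an agent until its validation artifacts exist. A minimal sketch follows; the record fields are illustrative inventions of mine, not a standard model risk management schema.

```python
# Illustrative pre-deployment gate for an autonomous agent. The three
# required artifacts mirror the panel's framing: data provenance,
# decision logic, and failure conditions must be documented up front.
from dataclasses import dataclass, field

@dataclass
class AgentValidationRecord:
    agent_id: str
    data_provenance: dict = field(default_factory=dict)    # source -> lineage notes
    decision_logic: str = ""                                # auditable description
    failure_conditions: list = field(default_factory=list)  # known failure modes
    kill_switch_tested: bool = False

def approve_for_deployment(rec: AgentValidationRecord) -> None:
    """Raise instead of deploying if any validation artifact is missing."""
    missing = []
    if not rec.data_provenance:
        missing.append("data provenance")
    if not rec.decision_logic:
        missing.append("decision logic")
    if not rec.failure_conditions:
        missing.append("failure conditions")
    if not rec.kill_switch_tested:
        missing.append("tested kill switch")
    if missing:
        raise RuntimeError(f"{rec.agent_id}: blocked, missing {', '.join(missing)}")
    print(f"{rec.agent_id}: cleared for deployment")

# Usage: an undocumented agent is stopped at the gate, not in production.
try:
    approve_for_deployment(AgentValidationRecord(agent_id="alpha-agent-01"))
except RuntimeError as err:
    print(err)  # alpha-agent-01: blocked, missing data provenance, ...
```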
Autonomous agents in finance are not a technology question. They are an accountability question. And as of April 2026, most funds do not yet have an answer.
About the Author
John Thomas Foxworthy is the Founder of the Global Institute of Data Science (GIDS), officially sponsored by Carnegie Mellon University’s Heinz College. He serves as an AI/ML instructor at Caltech CTME and UC San Diego Extended Studies, and operates as a Fractional Chief AI Officer for enterprise clients.

