6 April 2026 – Monday

Building for Where Finance Is Going: A conversation with Csanád Váradi.

Will AI replace financial analysts? What should universities do to prepare students for a changing job market? In this interview with Csanád Váradi, founder of Aevorex, we explore how AI is reshaping financial research, education, and the skills that will matter in the years ahead.

Csanád Váradi is a second-year Hungarian student in the International Economics and Finance bachelor at Bocconi University. A member of quant and equity research associations at our university, though without a technical background, he turned to AI to bridge the gaps in his understanding of data, metrics, and financial concepts, and realised that if early versions of AI models could already turn him into a technical operator, their natural evolution could have ground-breaking consequences. As access to knowledge becomes increasingly universal and solutions can be replicated at near-zero cost, the competitive edge in finance no longer lies in technical skills alone, but in the ability to use available tools to eliminate frictions others take for granted. This insight would later shape Aevorex, the financial research tool he is currently building.

In this interview, we explore his venture and the role of AI in reshaping financial research, education, and the world of work.

What is Aevorex? What is your mission, and what problem did you notice in the financial sector that you intend to solve with your product? How does the platform differentiate itself from established terminals like Bloomberg and other competitors?

Aevorex takes its name from two Latin words: aevo (era, age) and rex (king, ruler). For me, it reflects the belief that our generation has both the tools and the responsibility to consciously shape the future ahead. I believe the rules of finance will change drastically over the next decade, and my ambition is to be part of that transformation rather than observe it from the sidelines. Aevorex is my attempt to build for where finance is going, using what I am learning today as the foundation.

The product was born from a simple realization: fully manual financial research is increasingly inefficient, while today’s AI agents and AI browsers remain unreliable for serious financial decision-making. At the same time, the industry is undergoing a structural shift. Much like in content creation, many financial products will be generated semi-automatically at an institutional level. We are likely only two or three years into a transformation that will unfold over the next decade.

From an economic perspective, the implications are clear. If a six-figure analyst spends most of their time gathering data, and that work can be performed with a few thousand euros per year in infrastructure, the value equation changes. The people who will become more valuable are not those executing tasks manually, but those who can manage AI agents, direct them effectively, and verify their output. That is who Aevorex is built for. Its mission is to free up human time for higher-level thinking.

As for Bloomberg, it is not just a data company; it is also a premium network. Bankers and traders pay for the terminal because everyone else has one: decades of relationships, an internal chat system, transaction platforms, and status. That is not something you replace.

Aevorex is solving a different problem. If Bloomberg is a Michelin-star kitchen, we are the quality ingredient supplier. Today, high-quality data is widely available at relatively low cost, just as premium ingredients are. Compute, too, has become cheaper than human labor. The question is how to use that computing power to offset expensive working time. Our answer is to provide reliable financial data, optimized for AI agents or computing tools, running on cloud infrastructure and accessible through the tools professionals already use. Our first demo is built around this idea.

Your project has been admitted into programs like NVIDIA Inception, TEF, and Google Cloud for Startups. How did you achieve that, what does it represent for Aevorex, and what specific support are you receiving?

These programs each look for something specific. NVIDIA Inception and Google Cloud for Startups are designed for AI and computing-infrastructure companies; a traditional finance startup would not typically fit their criteria. TEF, on the other hand, is Bocconi’s partner entrepreneurship program, focused on early-stage ventures. What sets Aevorex apart is that it sits at the intersection of both worlds: financial domain expertise and technical infrastructure. That combination is likely what brought it onto their radar.

The concrete support is mostly cloud credits, which effectively translate into runway: time that can be spent building the product rather than covering infrastructure costs. But the most valuable aspect has been access to people. Through TEF, I had the opportunity to speak with Andrea Pignataro, founder of ION Group. From his perspective as a company leader, he confirmed what I had been hypothesizing: the banking industry is likely to undergo a structural transformation over the next decade. That conversation reassured me that I was focusing on the right problem.

Where do you see Aevorex in five years, and how do you think its role will evolve as AI capabilities mature, for example with the arrival of autonomous agents?

In five years, I see Aevorex as an established tool for high-level financial research. The core idea is not just to build a product to sell, but a tool I would personally rely on for complex analysis. Ultimately, I want this to further specialize beyond finance into sectors like healthcare, as the technology and data infrastructure mature.

The trajectory is about scaling along three main dimensions. First, data quality: progressively integrating more premium and specialized sources. Second, domain depth: building a stronger team with deeper expertise in specific sectors. Third, compute capability: moving from simple integrations with tools like ChatGPT toward more sophisticated pipelines designed for large-scale analysis. As long as finance relies on unnecessarily manual work, there will be demand for tools that reduce that friction.

On autonomous agents, the direction is technically feasible, and I have thought about it. Some software already moves in that direction. The reason I have deliberately avoided it so far is reliability. In research, the responsibility for decisions remains with the user; in execution, that responsibility shifts to the system. Given the current limitations of AI, I see that as too risky for financial decision-making. That said, the infrastructure we are building could support execution in the future if user demand moves there. It is not a never, but it is definitely a not yet.

The use of LLMs and aggregated datasets can generate bias or hallucinations. What measures did you adopt to guarantee transparency and accuracy with your product?

This is a crucial question, because AI is often over-glorified. Some of its capabilities are genuinely impressive, but I have also seen decision-makers in serious firms use it in contexts where its limitations cause more harm than good. An example is relying on a single AI output for complex or high-stakes tasks: when money is on the line, even 80% accuracy still requires full verification (think of miscalculating an expected interest rate or a risk estimate), often resulting in more work rather than less.

The core issue is that AI systems are fundamentally constrained by their access to data and context. They are blind to information beyond their training cutoff or what their tools can retrieve. Web search helps, but it is far from reliable: models struggle to assess source quality, weigh relevance, and manage large volumes of information within limited context windows. As tasks scale or become autonomous, small errors compound quickly, making them unacceptable for financial decision-making.

Aevorex is being designed around this bottleneck. Instead of asking AI to “figure everything out,” we focus on preparing reliable financial data in a structured format that allows models to load only what is strictly necessary. This reduces context usage and improves reliability. More broadly, I see AI as one tool among others. For example, large-scale data processing is better handled by deterministic systems like cloud analytics engines, while AI can be used to query, interpret, and connect validated outputs. Overall, the goal is to free up the time an analyst would spend on data processing.
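The general pattern described here, loading only the fields a question needs and computing exact figures deterministically rather than asking the model to do arithmetic, can be illustrated with a minimal sketch. This is not Aevorex's actual code; the dataset, field names, and figures below are invented for illustration.

```python
# Illustrative sketch of the "load only what is necessary" idea:
# structured, pre-validated financial data is filtered down to the
# fields a question requires before being handed to a language model.
# All names and numbers here are hypothetical.

FUNDAMENTALS = {
    "EXMPL": {
        "revenue_ttm": 391_000_000_000,
        "net_income_ttm": 93_700_000_000,
        "shares_outstanding": 15_200_000_000,
        # Large fields that stay out of the prompt unless requested:
        "filings": ["10-K 2024", "10-Q Q1 2025"],
    },
}

def eps(ticker: str, data: dict = FUNDAMENTALS) -> float:
    """Deterministic step: derived metrics are computed with ordinary
    code, not by the model, so the numbers are exact and reproducible."""
    record = data[ticker]
    return record["net_income_ttm"] / record["shares_outstanding"]

def build_context(ticker: str, fields: list[str],
                  data: dict = FUNDAMENTALS) -> str:
    """Context-building step: include only the requested fields,
    keeping the prompt small and every value traceable to the source."""
    record = data[ticker]
    lines = [f"{name}: {record[name]}" for name in fields if name in record]
    return f"{ticker}\n" + "\n".join(lines)

# The model would receive `context` plus the user's question; heavy
# fields like full filings are excluded unless explicitly asked for.
context = build_context("EXMPL", ["revenue_ttm", "net_income_ttm"])
```

The division of labor is the point: exact arithmetic and filtering happen in deterministic code, while the model only interprets a compact, validated slice of data.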

This is also connected to why we deliberately focus on research rather than execution. In research, responsibility remains with the user; in execution, it shifts to the system. Given the current state of AI, the risk of compounding errors in autonomous decision-making is simply too high.

How do you reconcile your studies with your role as a founder? What instruments and habits help you maintain equilibrium?

I don’t follow a fixed routine, because my tasks change constantly and I have to remain flexible. Long weeks are common, but I don’t see burnout as a threat, largely because I am genuinely engaged with what I work on, so it does not feel like a sacrifice.

Over time, I have learned to align my motivation and emotional state with my goals. For a long time, I confused objectives with tools. I had to remind myself that I didn’t come to Bocconi to chase grades, but to use the university as a resource for what I want to build my future on. Knowledge is a foundation; grades are only a signal that the knowledge is there. I no longer aim for a perfect GPA; instead, I prioritize what to learn, align that knowledge with my goals, and turn it into something concrete, like code, writing, or parts of the product.

Physical and mental health always come before any exam or deadline. I used to aim for perfection in everything. Now I focus on convictions rather than expectations, accepting that failure is statistically likely, and not seeing it as a threat. I’ve also learned that not everything needs to be solved at once. What matters is identifying the most important task at a given moment and staying consistent over time. There are wins and losses, but keeping a long-term perspective helps avoid overreacting to short-term impulses. Some days are inspiring; others are simply about discipline, but both are valuable.

You mentioned seeing university less as an end in itself and more as a resource for what you want to build. How has the Bocconi environment shaped the way you think about entrepreneurship and finance?

I don’t think attending a top university purely for the knowledge it provides is the smartest reason. Much of that knowledge is already accessible; Yale, for example, puts courses online for free. Yet people still pay for Bocconi. What also matters a lot is the environment, the culture, the practice, and the brand you can borrow early in your career.

Bocconi is strong on the cultural side. Not many places give you friends from Singapore to Brazil. The industry connection is a competitive edge: companies are constantly on campus, and the professors are engaged with the field. Everything is structured around finance and economics, and that inevitably shapes the way students think about problems.

At the same time, Bocconi is not the most entrepreneurial ecosystem in the traditional sense, but more focused on corporate jobs. For a long time, I saw that as a limitation, since I never imagined myself staying in large corporations for long. Over time, I started to see it as an advantage. Being exposed to investment banking, corporate finance, and asset management from the inside allows me to understand the constraints and inefficiencies of that world. If you are building tools for finance, that perspective can become an edge.

What do you think the rise of AI and agentic models means for the future of work, and especially for entry-level positions typically filled by recent graduates?

I think it’s irrational that the market is reducing opportunities specifically for young people. Our generation is the one learning to work with AI and all the new tech. We’re flexible, often more technologically up to date than seniors. I genuinely believe that a large number of today’s students will be able to work more effectively as future seniors than the professionals of today. But only if we get the chance to apply the skillsets we are developing.

What we are seeing instead is caution. Firms are waiting for clear returns on investment, and incumbents naturally want to protect existing positions. As a result, entry-level opportunities are reduced, and power remains concentrated with older generations. This is not a technological inevitability, but a structural choice.

I don’t believe AI will lead to less work overall. It will lead to different work. Skills like system design, architectural thinking, and problem formulation will become more valuable. The issue is that the labor market and the education system have not adapted yet. Diplomas alone are no longer a sufficient signal, especially at the undergraduate level.

On autonomous agents: they can handle small, contained tasks, but running complex multi-step processes often leads to compounding errors. For now, I see them as tools rather than replacements.

Some people say we will not need to work in the future. I disagree: we still have a lot of important and inspiring problems to work on, like climate change, social inequality, healthcare, space exploration, and global stability. For me, working on these is a common responsibility. Clearly there is no shortage of work to be done; if we made work optional, we would be misallocating human capital.

You have argued on your socials that higher education still focuses on skills likely to be replaced by AI. Given how slowly universities adapt, what path do you think they should take to stay aligned with the competencies students and firms will actually need?

First, I believe some things about universities are irreplaceable: research, academia, the intellectual community. A society’s ability to innovate depends on having spaces where thinking can be concentrated.

That said, in a world where almost any information is accessible from a low-cost device, expensive tuition needs justification. The value of a university is no longer just knowledge transfer. It lies in culture, practical exposure, industry connection, and the environment it creates. I mentioned this earlier.

What’s under pressure now is the practical side. For example, many universities are more likely to ban AI than to build a syllabus around market realities. It’s risky, untested, and hard to assess. But in the real world, using AI is not optional.

Universities should encourage its use while teaching its limitations, risks, and ethical implications. Students often understand these tools more intuitively than faculty, simply because they interact with them daily.

But this doesn’t mean the knowledge behind a diploma is worthless. Critical thinking and domain expertise matter more than ever. They’re just not enough anymore. They’re the bare minimum. In the future, technological fluency will not be an advantage; it will be the baseline.

Overall, I see the growing emphasis on human relationships and individual proactivity about the future as an inevitable consequence of the moment we are living through.
