AI-reliant graduates are losing appeal for finance jobs as employers cite a lack of depth in their ideas.
While artificial intelligence is revolutionizing the financial sector, some companies are starting to push back against a growing trend: graduates who lean heavily on AI tools without demonstrating deeper analytical skills.
A report from The Financial Times highlights this issue through insights from seasoned finance professionals, including a financier in New York who described the 2025 interns at his company as the first cohort of “true AI natives.” These students, raised on digital platforms and generative AI tools, initially seemed highly competent during the hiring process.
However, the financier noted that challenges arose when senior executives scrutinized the interns' ideas more thoroughly. Although their presentations and results appeared polished, many of their responses reportedly lacked depth, creativity, and independent reasoning. As a result, the firm extended fewer return offers and shifted its hiring priorities toward candidates with stronger critical thinking abilities, including those with humanities backgrounds.
Finance firms seek more than just AI skills
The finance industry as a whole continues to aggressively invest in AI. Major companies like JPMorgan and Visa are increasingly branding themselves as technology-driven entities, while Nvidia recently indicated that most finance leaders view AI as essential for future growth.
Despite this enthusiasm, actual results in practice are varied. A recent survey from Cambridge Judge Business School found that while over 80 percent of financial organizations are now using AI, most applications are still concentrated on back-office functions rather than key strategic roles.
Additionally, the same survey revealed that many companies find it challenging to evaluate AI's true business impact. Only a small fraction reported significant profit improvements, while a substantial number said AI had made no measurable difference to their bottom line.
This gap is beginning to affect hiring practices and workplace expectations. Employers are increasingly seeking individuals who can critique AI-generated results, spot flaws, and exercise independent judgment, rather than merely those who can proficiently use AI tools.
Implications beyond the finance sector
This trend signifies a wider transformation occurring across various industries. While AI competencies are becoming standard, organizations are beginning to differentiate between individuals who merely rely on AI for solutions and those who can think critically in conjunction with it.
For students and emerging professionals, this might alter what employers prioritize the most. While technical expertise and familiarity with AI remain crucial, they are no longer sufficient on their own. Skills in communication, reasoning, adaptability, and a deeper understanding of subjects are becoming equally vital in an AI-driven work environment.
Simultaneously, regulatory bodies are growing more cautious about AI’s functions in finance. Issues regarding AI hallucinations, cybersecurity risks, and automated decision-making are prompting financial regulators to establish safer testing frameworks and oversight protocols.
The challenge ahead
There seems to be a growing agreement in finance that AI serves best as a tool to enhance human thinking rather than replace it. As adoption increases, the firms that are likely to gain the most benefits may not be the ones utilizing the most AI, but those that merge automation with employees capable of strong judgment and original analysis.
This transformation could reshape hiring trends in the coming years, potentially explaining why some finance firms are becoming less enthusiastic about graduates who are overly influenced by AI.