AI analytics agents need guardrails, not bigger models.

      Imagine a VP of finance at a major retailer. She poses a straightforward question to the company’s new AI analytics agent: “What was our revenue last quarter?” The response comes back promptly.

      Assured.

      Clear.

      Incorrect.

      This exact situation occurs more often than many organizations will admit. AtScale, which helps businesses build governed analytics environments and enforce semantic consistency, has found that simply adding parameters cannot solve the AI governance and context challenges enterprises face.

      When AI systems query inconsistent or ungoverned data, adding model complexity doesn't solve the problem; it compounds it. Companies across sectors have rushed to deploy agentic AI: systems that analyze data, surface insights, and trigger automated workflows. In response, the models themselves have kept scaling, with more parameters, more compute, and more features. The underlying belief is that a sufficiently large model will eventually yield reliable results.

      However, there are signs this belief is flawed. Recent research from TDWI found that nearly half of respondents rated their AI governance initiatives as immature or very immature. The problem lies less in how the models function than in the data lineage and business definitions underneath them.

      Why larger models don't resolve governance issues: The AI industry often operates under the unexamined assumption that enhanced performance arises from more advanced models, which will eventually correct their own errors. In enterprise analytics, this assumption can quickly unravel.

      While increased scale may broaden a model's reasoning capabilities, it does not automatically enforce the agreed-upon definition of gross margin. It does not rectify metric inconsistencies that have persisted across different dashboards for years. Additionally, it does not independently produce traceable lineage.

      Governance challenges do not resolve through scaling. Issues such as business rules buried in specific tools, inconsistent definitions between teams, and results lacking an audit trail are structural problems that a larger model cannot rectify. Instead, it generates unreliable answers more fluently.

      At AtScale, a recurring theme among our clients is that when inconsistent data definitions follow organizations into their AI layer, the issues do not simply vanish; they often escalate, typically with greater speed and less transparency than the previous layer could provide.

      Performance and governance are distinct responsibilities. A model performs reasoning. A governance layer delineates what the model reasons over, limits how it applies business logic, and ensures outputs can be traced to a source of record. One cannot replace the other.

      The real risk: Unconstrained agents in enterprise settings. The challenge with AI agents is rarely the model itself, but rather the data it utilizes and the visibility of its actions.

      Even when working from shared data, AI agents may interpret it differently across systems. In large organizations, small differences in definitions can yield divergent results. The structural risks generally come from four sources:

      Unclear data definitions: Agents draw from sources where the same metric means different things to different teams.

      Metric misalignment: Metrics from different departments may not reconcile; two agents can return two different answers, with no way to tell which is correct.

      Opaque reasoning: Outputs arrive without a clear lineage explaining how they were produced.

      Audit gaps: When outputs cannot be traced back to a governed source, there is no reliable way to catch errors, assign accountability, or apply corrections.
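      The first two risks above can be made concrete with a toy example. The orders, fields, and business rules below are invented for illustration: two teams compute "revenue" from the same records but apply different rules, so two agents answering the same question would disagree.

```python
# Hypothetical illustration of metric divergence: the same orders,
# two different business rules for "revenue". All data is invented.

orders = [
    {"amount": 100.0, "refunded": False, "tax": 8.0},
    {"amount": 250.0, "refunded": True,  "tax": 20.0},
    {"amount": 175.0, "refunded": False, "tax": 14.0},
]

# Finance's rule: revenue excludes refunded orders and tax.
finance_revenue = sum(o["amount"] for o in orders if not o["refunded"])

# Sales' rule: revenue counts every booked order, tax included.
sales_revenue = sum(o["amount"] + o["tax"] for o in orders)

print(finance_revenue)  # 275.0
print(sales_revenue)    # 567.0
```

      Neither number is wrong by its own team's definition; without a shared, governed definition there is simply no single correct answer for an agent to return.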

      These issues are not indications of AI failure; rather, they reflect that the infrastructure supporting AI has not kept pace.

      What guardrails really represent in AI analytics: Guardrails are often seen as confining. However, in many instances, they are the very conditions that allow AI agents to function with greater confidence.

      Guardrails align AI-generated outputs with established business logic. They also define the framework within which autonomous agents operate, so that reliability keeps pace as autonomy grows. In analytics, guardrails typically take several specific forms:

      Unified data definitions: A single understanding of terms such as revenue, churn, or margin that is consistent across all systems.

      Business logic constraints: Established rules governing how calculations should be executed, regardless of the tools or agents involved.

      Visibility of lineage: The ability to trace the origin of any output produced.

      Access controls: Defined permissions outlining what data an agent may query.

      Standardization of metrics: Consistent definitions that apply across departments and platforms.

      The intent is not to hinder AI performance, but rather to provide AI with a solid foundation on which to operate.
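      As a minimal sketch of how two of these guardrails (unified definitions and access controls) might be enforced before an agent's query ever runs, consider the following. The metric registry, SQL fragments, and role names are all assumptions made up for illustration, not any particular product's API.

```python
# A toy guardrail layer: one governed definition per metric, plus a
# role check, applied before an agent's request is executed.
# Metric names, SQL, and roles are invented for illustration.

METRICS = {
    "revenue": {"sql": "SUM(amount) FILTER (WHERE NOT refunded)",
                "allowed_roles": {"finance", "executive"}},
    "churn":   {"sql": "COUNT(*) FILTER (WHERE cancelled) / COUNT(*)",
                "allowed_roles": {"analytics"}},
}

def resolve_metric(name: str, role: str) -> str:
    """Return the single governed calculation for a metric,
    or refuse the request outright."""
    metric = METRICS.get(name)
    if metric is None:
        raise KeyError(f"'{name}' is not a governed metric")
    if role not in metric["allowed_roles"]:
        raise PermissionError(f"role '{role}' may not query '{name}'")
    return metric["sql"]

print(resolve_metric("revenue", "finance"))
# SUM(amount) FILTER (WHERE NOT refunded)
```

      The point of the design is that the agent never improvises a calculation: it either receives the one agreed-upon definition or is stopped before touching the data.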

      The role of the semantic layer as a framework of constraints: A semantic layer acts as an intermediary between data and the applications or AI agents that utilize it, defining business concepts, implementing logical processes, and providing a common terminology for all applications and AI agents to reference.

      A semantic layer does not alter or duplicate data; it clarifies what the data means.
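      A toy version of that idea can be sketched in a few lines. Everything below is an assumption for illustration (the table name, the metric expression, the `Metric` structure): the point is only that a semantic layer maps a business term to one governed calculation and records where the answer comes from, without copying any data.

```python
# A toy semantic layer: business terms map to governed definitions,
# and every compiled query carries its lineage. Names are invented.

from dataclasses import dataclass

@dataclass(frozen=True)
class Metric:
    name: str          # the business term every agent must use
    expression: str    # the one agreed-upon calculation
    source_table: str  # lineage: the governed source of record

SEMANTIC_LAYER = {
    "revenue": Metric("revenue",
                      "SUM(net_amount)",
                      "warehouse.finance.orders"),
}

def compile_question(term: str) -> tuple[str, str]:
    """Translate a business term into governed SQL plus its lineage."""
    m = SEMANTIC_LAYER[term]
    sql = f"SELECT {m.expression} FROM {m.source_table}"
    return sql, m.source_table

sql, lineage = compile_question("revenue")
print(sql)      # SELECT SUM(net_amount) FROM warehouse.finance.orders
print(lineage)  # warehouse.finance.orders
```

      However an agent phrases the question, "revenue" resolves to the same expression against the same source, and the lineage travels with the answer.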
