Imagine kicking off a project. You log into three dashboards that should be reporting the same data. Instead, each one shows a different number. Each is technically “correct” in its own context, but together they create confusion rather than clarity. What’s missing is the alignment that transforms decentralized data into a single source of truth.
This paradox is at the heart of many data mesh implementations. Decentralization promises autonomy and speed, but without shared modeling practices it often produces the opposite: fragmentation. In her book Data Mesh: Delivering Data-Driven Value at Scale, Zhamak Dehghani reframed the challenge of scaling data not as a technical problem, but as an organizational design problem. Her solution was to shift responsibility from a central data team to domain-oriented teams, putting ownership in the hands of the people closest to the business and its context. The result was greater agility, accuracy, and relevance.
Central to this philosophy is the principle of autonomy. Yet in practice, autonomy is often misinterpreted as isolation. In many organizations, modeling has become a domain-by-domain activity conducted without shared reference points or collaborative structures. This may accelerate delivery in the short term, but over time it leads to semantic drift, duplicated effort, and a steady erosion of trust in the data itself.
The real question is whether independence and coherence can coexist. Should data mesh teams still model together?
How We Got Here
The conversation about whether to model together or apart isn’t new. In the enterprise data warehouse era of the 1990s and early 2000s, modeling was tightly governed. Every structural change required lengthy review cycles, ensuring consistency but severely limiting agility. This control-first approach gave way in the 2000s and early 2010s to semantic layers in business intelligence tools, where products like BusinessObjects, Cognos, and later Looker abstracted the warehouse schema. While these semantic layers allowed for some decentralization, they still largely depended on centralized oversight.
The late 2010s saw the rise of analytics engineering, with tools like dbt enabling analysts and analytics engineers to create and maintain domain-aware models closer to the business. This brought agility and responsiveness but also opened the door to divergent practices across teams.
Now, in the 2020s, data mesh has taken decentralization further by putting modeling authority directly into the hands of domain product teams. Some argue that what began as empowerment has shifted to isolation.
The Appeal of Modeling in Isolation
For many organizations, the appeal of domain-level modeling independence is compelling. Removing the need for approval from a “modeling council” or central architecture group eliminates major bottlenecks. This approach also aligns with Conway’s Law, which states that systems tend to mirror the communication structures of the organizations that design them. If domains are organizationally independent, it feels natural for their data models to follow suit. In some cases, this independence has been framed as a form of empowerment — a way of letting domain experts fully own their data products without interference.
The Risks of Pure Autonomy
Pure autonomy comes at a cost. Without a mechanism for aligning definitions and modeling conventions, organizations accelerate what can be called semantic entropy. As each domain develops its own view of key entities, subtle differences begin to pile up. “Customer” in Sales may mean something slightly different from “Customer” in Support, and without reconciliation the two definitions drift further apart. When these products eventually need to interoperate, the incompatibilities create friction.
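To make the drift concrete, here is a minimal sketch, using hypothetical field names and date windows, of how two domains can each produce an “active customer” count that is internally consistent yet impossible to reconcile with the other:

```python
from datetime import date, timedelta

# Hypothetical records; real domains would read these from their own stores.
orders = [
    {"customer_id": 1, "order_date": date(2024, 5, 10)},
    {"customer_id": 2, "order_date": date(2023, 11, 2)},
]
tickets = [
    {"customer_id": 2, "opened": date(2024, 6, 1)},
    {"customer_id": 3, "opened": date(2024, 6, 3)},
]
today = date(2024, 6, 15)

# Sales: a customer is "active" if they placed an order in the last 90 days.
sales_active = {o["customer_id"] for o in orders
                if today - o["order_date"] <= timedelta(days=90)}

# Support: a customer is "active" if they opened a ticket this quarter.
support_active = {t["customer_id"] for t in tickets
                  if t["opened"] >= date(2024, 4, 1)}

print(len(sales_active))    # 1 -> the Sales dashboard reports 1 active customer
print(len(support_active))  # 2 -> the Support dashboard reports 2 active customers
```

Both counts are defensible under their own definition; neither is wrong. The problem is that nothing records the difference, so it surfaces only when someone tries to reconcile the dashboards.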
The challenges don’t stop at semantics. Data products that should connect seamlessly often require expensive transformation layers. Different domains may unknowingly solve the same modeling problem in parallel, duplicating effort. And perhaps most damaging, conflicting dashboards surface metrics that are all technically “correct” within their own definitions but impossible to reconcile. Over time, this erodes trust. Executives and stakeholders stop questioning the numbers and start questioning the credibility of the entire data organization.
The Physics of Scale in Data Mesh
From a systems theory perspective, decentralization increases fragmentation. In any complex system, more autonomous agents making independent changes means more variability over time. In physics, entropy is countered by energy input; in organizational systems, that “energy” comes in the form of alignment efforts. Shared modeling is one of the most efficient ways to inject that alignment. It acts as a stabilizing force that keeps domains from drifting too far apart while still allowing them to operate at their own pace.
The Case for Shared Modeling
The goal isn’t to bring back the bottlenecks of centralized modeling; it’s to create a collaborative modeling fabric that connects domains without constraining them. This fabric provides a shared semantic layer the entire organization can rely on, ensuring that foundational concepts like “Revenue” or “Churn Rate” mean the same thing everywhere.
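How that shared layer is implemented varies by tool, but the core idea fits in a few lines. The names below are illustrative rather than any particular product’s API: a definition is owned and versioned in one place, and each domain’s data product declares which version it implements instead of re-deriving the concept locally.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SharedDefinition:
    """A glossary entry owned in one place and referenced everywhere."""
    name: str
    version: str
    definition: str
    owner: str

CHURN_RATE = SharedDefinition(
    name="Churn Rate",
    version="1.2.0",
    definition="Customers lost in the period divided by customers at period start.",
    owner="customer-analytics",
)

# Each data product pins the version of the shared term it implements,
# so a change to the definition shows up as an explicit diff, not a surprise.
marketing_product = {"implements": {CHURN_RATE.name: CHURN_RATE.version}}
finance_product = {"implements": {CHURN_RATE.name: CHURN_RATE.version}}
```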
With that foundation in place, interoperability comes by design. Data products from different domains can integrate smoothly, without expensive rework. It also lowers the cognitive burden across the business: executives, analysts, and engineers alike can navigate the data landscape without needing to relearn each domain’s unique modeling conventions.
The Economics of Modeling Together
The hidden costs of not modeling together are significant. When definitions drift apart, organizations end up investing heavily in reconciliation projects to align data after inconsistencies are discovered. Cross-domain product launches can be delayed by incompatible models that require additional engineering work before integration is possible. And perhaps most damaging, conflicting metrics erode stakeholder trust, reducing the perceived value of the data team’s work.
These costs compound over time. The longer semantic drift is left unchecked, the more expensive it becomes to correct. This is why the ROI of alignment is so high: a modest investment in shared modeling processes early on can prevent exponentially larger expenses later.
The Future of Collaborative Modeling
Looking ahead, several trends are likely to make shared modeling more impactful. AI-assisted modeling will be able to suggest definitions, detect inconsistencies, and automatically link related concepts across domains, lowering the coordination cost. Internal “model marketplaces” may emerge, allowing domains to publish and reuse vetted models much like open-source components, while automated governance triggers will flag potential overlaps or divergences as soon as they occur, rather than months after problems surface.
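As a rough illustration of what such a governance trigger could check, assuming domains publish their term definitions to a shared catalog (the structure below is hypothetical), the trigger only needs to flag terms that carry more than one definition across domains:

```python
from collections import defaultdict

# Hypothetical catalog: (domain, term) -> published definition text.
published = {
    ("sales", "customer"): "Any account with a signed contract.",
    ("support", "customer"): "Any account with an open support entitlement.",
    ("finance", "revenue"): "Revenue recognized in the reporting period.",
}

def divergent_terms(catalog):
    """Return terms that are defined differently in more than one domain."""
    by_term = defaultdict(set)
    for (_, term), text in catalog.items():
        by_term[term].add(text)
    return sorted(term for term, texts in by_term.items() if len(texts) > 1)

print(divergent_terms(published))  # ['customer'] -> flag for cross-domain review
```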
Ellie.ai is already building toward this future. By combining domain-driven modeling environments with embedded glossary and metadata services, Ellie makes it possible to maintain autonomy without losing the benefits of an integrated semantic backbone.
Deciding Your Level of Collaboration
For data leaders, the decision is not binary. The right balance between autonomy and shared modeling depends on three factors. First, how interdependent are your domains? If data products rely heavily on one another, tighter alignment is necessary. Second, what is your tolerance for metric variation? In regulated industries, even small differences can be unacceptable. Third, how fast is your organization changing? Rapid change increases the risk of semantic drift, making shared modeling even more valuable.
Ellie.ai: Autonomy Without Anarchy
Data mesh is more than an architectural pattern; it’s a socio-technical shift. The social side, how teams align on meaning, is just as critical as pipelines, APIs, or cloud infrastructure. Decentralization without coordination isn’t empowerment; it’s fragmentation at scale. Shared modeling, supported by the right tooling, is the connective tissue that keeps the mesh coherent.
Ellie.ai puts this principle into practice by giving each domain the freedom to model independently while connecting those models through a shared glossary, metadata repository, and versioned visual representations. Changes propagate in a controlled way, definitions stay discoverable across the organization, and consensus becomes something teams choose because it’s easy, not because it’s mandated.
With Ellie.ai, modeling together is no longer a trade-off between speed and trust — it’s the way to achieve both. Interested in building better data products? No additional fees, commitment, or software installation required. Book a demo today.