Thirteen Thoughts About the Data Mesh

The industry is abuzz with the Data Mesh concept right now. Find out why we're so enthusiastic about it.

Martin Willcox
May 15, 2021 | 5 min read
Data Mesh and Why It Matters.
I am having lots of discussions with colleagues and customers about the Data Mesh concept first articulated by Zhamak Dehghani – so many, in fact, that I am currently working on a white paper with a small group of colleagues. White papers are mighty endeavors accompanied by multiple rounds of reviews and feedback. So, whilst we work through that process, I thought that I would take this opportunity to quickly share some headline thoughts with you about why we are enthusiastic about the Data Mesh.

#1 Designing and building analytic solutions is hard, for at least three reasons. First, because requirements are often ambiguous and fluid. Second, because these solutions rely on the repurposing of data that may have been generated by processes – and for purposes – unrelated to the current business problem. And third, because integrating analytic insights into business processes requires complex trade-offs to be discovered, understood and assessed.

#2 For these reasons, successful data and analytic platforms are – and arguably always have been – constructed incrementally and in stages. This is why successful data-driven organisations focus on the rapid delivery of data products aligned with real-world requirements. 

#3 Data practitioners have been relatively slow to adopt Agile software development methods – but where these methods are adopted and are combined with automation tools and DevOps processes, we have often seen 10X improvements in time-to-market for data products. This is the motivation for the development of Teradata’s DataOps frameworks and tooling.

#4 The Data Mesh concept and Domain-Driven Design (DDD) principles give us a framework and approach for the intelligent decomposition of a large problem space (development of the data platform) into a set of smaller problems (individual data products) that are tractable using Agile development methods and “two pizza” development teams. 

#5 Fundamental to DDD is the idea of the bounded context, i.e. the definition of explicit interrelationships between domains. Because “data love data” and frequently need to be combined across functional and domain boundaries, lightweight governance and data management processes that ensure these interrelationships are “designed-in” to individual data products are critical. Understanding, defining and enforcing the minimum set of Primary Key / Foreign Key relationships required to reliably and accurately join and compare data across different domains is vitally important in this process, as are the business, technical and operational metadata that enable data and data products to be discovered and re-used.
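To make the bounded-context point concrete, here is a minimal sketch, assuming two hypothetical domain data products – an orders product from a sales domain and a shipments product from a logistics domain – that have agreed a conformed order_id key. All names and columns are illustrative.

```python
import pandas as pd

# Hypothetical data product owned by the sales domain.
orders = pd.DataFrame({
    "order_id": [1001, 1002, 1003],   # conformed key, agreed across domains
    "customer_id": ["C-17", "C-42", "C-17"],
    "order_value": [250.0, 99.0, 410.0],
})

# Hypothetical data product owned by the logistics domain.
shipments = pd.DataFrame({
    "order_id": [1001, 1002, 1004],   # same key definition, grain and type
    "carrier": ["DHL", "UPS", "DHL"],
    "days_to_deliver": [2, 5, 3],
})

# Because both products "designed-in" the same order_id key, a cross-domain
# question (does delivery speed relate to order value?) is a simple join.
cross_domain = orders.merge(shipments, on="order_id", how="inner")
print(cross_domain)
```

Had the two teams modelled the key differently – a different grain, type or surrogate scheme – this join would be unreliable or impossible, which is precisely the failure mode that designed-in interoperability prevents.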

#6 It will often be appropriate to create enterprise domains to support the realisation of cross-functional data products – and where interoperability has been designed into the underlying data products, these cross-functional data products can be built better, cheaper and faster.

#7 “Lightweight” is a crucial qualifier. Over-engineering and over-modelling can slow the development of data products to a crawl. Especially when it is unclear which data will be frequently shared-and-compared – as it often is when developing MVP data products – “light integration” approaches like Teradata’s LIMA framework should often be preferred. “Bi-modal” analytics and “Data Labs” also have a role to play here.

#8 Technical debt is a major drag on digital transformation initiatives. Re-use of data products is critical to the reduction of technical debt. Most data has very little value until it has gone through a process of cleansing and refinement. Wherever possible and practical we should do this once, rather than constructing “pipeline jungles” of redundant, overlapping data transformation processes that apply essentially the same transformations to essentially the same data over and over. Very many organisations are moving towards the use of Feature Stores to support their Machine Learning initiatives for precisely this reason.
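As a sketch of that “do it once” principle, assume a hypothetical in-process feature registry: each cleansing and refinement step is registered under a stable name and defined exactly once, instead of being re-implemented in every pipeline. Real Feature Stores are far richer than this; the sketch only illustrates the reuse pattern.

```python
from typing import Callable, Dict

import pandas as pd

# Hypothetical registry: feature logic is defined once and looked up by name,
# rather than copy-pasted into every pipeline that needs it.
_FEATURES: Dict[str, Callable[[pd.DataFrame], pd.Series]] = {}

def feature(name: str):
    """Register a feature transformation under a stable, discoverable name."""
    def register(fn: Callable[[pd.DataFrame], pd.Series]):
        _FEATURES[name] = fn
        return fn
    return register

@feature("order_value_zscore")
def order_value_zscore(df: pd.DataFrame) -> pd.Series:
    # Cleansing (impute missing values) and refinement (standardise) live in
    # exactly one place.
    values = df["order_value"].fillna(df["order_value"].median())
    return (values - values.mean()) / values.std()

def compute(name: str, df: pd.DataFrame) -> pd.Series:
    """Consumers request the feature by name instead of re-deriving it."""
    return _FEATURES[name](df)
```

Two Machine Learning pipelines that both call compute("order_value_zscore", orders) are guaranteed to apply identical logic – there is no “pipeline jungle” of near-duplicate transformations to maintain.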

#9 Some commentators would have us believe that the most important part of the Data Mesh concept is the ability to rapidly provision containerised infrastructure. Bluntly, it isn’t. Provisioning infrastructure was never the “long pole in the tent”, even before Cloud deployment models made it even simpler and even quicker. The long pole in the tent is cleansing and semantically aligning data so that they can be reliably shared-and-compared. See this cautionary tale of wrangling fairly basic COVID metrics for one recent example of how complex this can be even within the context of a single domain (and just how negative the consequences can be).

#10 Federating the development of complex data products does not automatically imply the federation of their deployment. In fact, a spectrum of deployment options is available to organisations deploying Data Mesh solutions. Because these different strategies are associated with fundamentally different engineering trade-offs it is important that organisations frame these choices correctly and are intentional about their decisions. In general terms, there are three different strategies for data mesh deployment: (1) schema co-location, (2) schema connection, and (3) schema isolation. Note that these choices are not mutually exclusive and that most real-world implementations will continue to use a combination of these approaches. 
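A hypothetical sketch of what the three strategies look like at query time may help. The schema, table and server names below are illustrative, and the federated syntax is indicative only, not exact QueryGrid syntax.

```python
# (1) Schema co-location: both domain schemas live in one scalable database
#     instance, so a cross-domain query is an ordinary local join.
CO_LOCATED = """
SELECT o.order_id, s.days_to_deliver
FROM   sales.orders o
JOIN   logistics.shipments s ON o.order_id = s.order_id;
"""

# (2) Schema connection: the domains live on separate systems that are joined
#     at query time over a data fabric (a QueryGrid-style foreign server).
CONNECTED = """
SELECT o.order_id, s.days_to_deliver
FROM   sales.orders o
JOIN   logistics.shipments@logistics_server s ON o.order_id = s.order_id;
"""

# (3) Schema isolation: there is no shared query path; data must be exported
#     from one domain and loaded into the other before it can be joined.
ISOLATION_STEPS = [
    "export logistics.shipments",          # platform-specific unload, elided
    "load into sales.shipments_copy",      # platform-specific load, elided
    "join sales.orders to sales.shipments_copy locally",
]
```

The engineering trade-offs fall out directly: co-location makes cross-domain joins cheapest, connection avoids bulk data movement at the cost of fabric performance, and isolation duplicates both data and pipelines.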

#11 Even at the low end, the data platforms in Global 3,000 organisations typically support 50+ analytic applications and run over a billion queries per year - with up to two orders of magnitude increases in query volumes likely during the next decade. Very many enterprise analytic workloads are characterised by: complex, stateful processing; repeated execution against continuously changing data; and embedded deployment in mission-critical business processes. In addition, improvements in the performance of multi-core CPUs continue to outpace improvements in the performance of network and storage sub-systems. For all of these reasons, the schema co-location and schema connection strategies continue to offer important performance, scalability and TCO advantages in very many scenarios. Note that schema connection strategies assume the use of a high-performance and scalable data fabric, like Teradata’s QueryGrid technology.

#12 We are enthusiastic about the Data Mesh concept because it places intelligent decomposition front-and-centre in the rapid development of data platforms and complex data products. Our recommended approach to the implementation of Data Mesh-based architectures is to create separate schemas for each domain. Responsibility for data stewardship, data modelling and population of the schema content is owned by experts with business knowledge of the specific domain under construction. This approach removes many of the bottlenecks associated with attempting to consolidate all enterprise data into a single, centralised schema. The domain-oriented (and, where appropriate, semantically linked) schemas provide a collection of data products aligned to areas of business focus within the enterprise.
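One way to make that domain ownership explicit is to publish a small, machine-readable descriptor alongside each schema. The format below is hypothetical, not a Teradata artefact; it simply shows the kind of business, technical and operational metadata that lets a data product be discovered and joined reliably.

```python
# Hypothetical data-product descriptor: one per domain schema, published to a
# catalogue so that products can be discovered, governed and re-used.
orders_product = {
    "domain": "sales",
    "schema": "sales",                         # separate schema per domain
    "steward": "sales-data-team@example.com",  # named owner with domain knowledge
    "tables": {
        "orders": {
            "primary_key": ["order_id"],
            # Conformed keys that other domains may join on (see #5).
            "shared_keys": {"order_id": "enterprise.order_id"},
        },
    },
}
```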

#13 Most large enterprises already operate across multiple geographies – and are increasingly leveraging multiple Cloud Service Providers (CSPs). That makes the Connected Data Warehouse fundamental to at-scale Data Mesh implementation. Within a CSP and within a geography, co-location of multiple schemas aligned to specific business domains within a single, scalable database instance gives the best of both worlds: agility in implementation and high performance in execution.

More on this topic in a fully-fledged white paper soon.

About Martin Willcox

Martin leads Teradata’s EMEA technology pre-sales function and organisation and is jointly responsible for driving sales and consumption of Teradata solutions and services throughout Europe, the Middle East and Africa. Prior to taking up his current appointment, Martin ran Teradata’s Global Data Foundation practice and led efforts to modernise Teradata’s delivery methodology and associated tool-sets. In this position, Martin also led Teradata’s International Practices organisation and was charged with supporting the delivery of the full suite of consulting engagements delivered by Teradata Consulting – from Data Integration and Management to Data Science, via Business Intelligence, Cognitive Design and Software Development.

Martin was formerly responsible for leading Teradata’s Big Data Centre of Excellence – a team of data scientists, technologists and architecture consultants charged with supporting Field teams in enabling Teradata customers to realise value from their analytic data assets. In this role Martin was also responsible for articulating Teradata’s Big Data strategy to prospective customers, analysts and media organisations outside of the Americas. During his tenure in this position, Martin was listed in dataIQ’s “Big Data 100” as one of the most influential people in UK data-driven business in 2016. His Strata (UK) 2016 keynote can be found at: www.oreilly.com/ideas/the-internet-of-things-its-the-sensor-data-stupid; a selection of his Teradata Voice Forbes blogs can be found online here; and more recently, Martin co-authored a series of blogs on Data Science and Machine Learning – see, for example, Discovery, Truth and Utility: Defining ‘Data Science’.

Martin holds a BSc (Hons) in Physics & Astronomy from the University of Sheffield and a Postgraduate Certificate in Computing for Commerce and Industry from the Open University. He is married with three children and is a solo glider pilot, supporter of Sheffield Wednesday Football Club, very amateur photographer – and an even more amateur guitarist.
