George Anadiotis
for Big on Data
| October 27, 2021
| Topic: Enterprise Software

Cloud-native architecture and breaking down application monoliths are good, but if you do that, you also need a way to manage the fine-grained mess that ensues, argues EnterpriseWeb
How do you solve the age-old data integration issue? We addressed this in one of the first articles we wrote for this column back in 2016. It was a time when key terms and trends that dominate today’s landscape, such as knowledge graphs and data fabric, were under the radar at best.
Data integration may not sound as deliciously intriguing as AI or machine learning tidbits sprinkled on vanilla apps. Still, it is the bread and butter of many, the enabler of all cool things using data, and a premium use case for concepts underpinning AI, we argued back then.
The key concepts we advocated for then, federation and semantics, have since been widely recognized and adopted in their knowledge graph and data fabric guise. Back then, the ideas were barely on the radar and parts of the technology were less mature. Today, knowledge graphs and data fabrics are top of mind; just check the latest Gartner reports.
The reason we’re revisiting that old story is not to bask in some “told you so” self-righteousness, but to add to it. Knowledge graphs and data fabrics can, and hopefully will, eventually, address data integration issues. But what about application integration? Could graphs and ontologies help with that, too?
Data integration and application integration
The “99 data stores” narrative was based on the true story of how the proliferation of data sources spells trouble for the enterprise. But what about applications and APIs? That same story is playing out there, so why not use the same cure for that disease, too? That’s what Dave Duggal and EnterpriseWeb are looking to achieve.
Duggal, founder and CEO of EnterpriseWeb, has spent most of his career starting, growing, building, and turning around companies. What motivated him to start EnterpriseWeb was his experience of building and integrating applications, and seeing how static that left operations in the companies he was running. He said:
The way that traditional software development happens, even to this day, is manual code and manual integration, primarily. You code and recode, integrate and re-integrate. And, of course, that does not scale for today’s demands.
At one point, everything was on a mainframe — a big centralized monolith, but also very powerful. One of the reasons that mainframes are very powerful is that on the mainframe, data and code live together. There wasn’t this false divide between the data team and the application team.
Now we’re distributed. We have a whole host of new capabilities, but we also have a whole host of challenges. Because when we disaggregated from the mainframe, and then from monolithic applications, which were these tightly coupled balls of code, to more service-based applications, to microservices and now serverless functions, we did so without having a programming model for composition and management.
In other words, we took everything apart. Humpty Dumpty broke, and all the pieces were on the floor. We failed to introduce a mechanism, a means or a method for composing those things back together, and look at where we are today.
The core of Duggal’s thesis, and EnterpriseWeb’s offering, is that the same tools that can address data integration should also be able to address application integration: graphs and ontologies.
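To make that idea a little more concrete, here is a minimal sketch, in Python with rdflib, of what treating application integration as a graph problem can look like. This is not EnterpriseWeb's product or data model; the ontology terms used here (ex:Service, ex:consumes, ex:produces) are illustrative assumptions. The point is simply that, just as with data integration, services and their inputs and outputs can be described as a graph, and composition becomes a query over that graph.

```python
# Illustrative sketch only: model two services as nodes in an RDF graph,
# describe what each consumes and produces, and query the graph to find
# which service can be wired in next. The vocabulary is made up for the example.
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/app#")
g = Graph()
g.bind("ex", EX)

# An order service that produces Order objects.
g.add((EX.OrderService, RDF.type, EX.Service))
g.add((EX.OrderService, EX.produces, EX.Order))

# An invoice service that consumes Orders and produces Invoices.
g.add((EX.InvoiceService, RDF.type, EX.Service))
g.add((EX.InvoiceService, EX.consumes, EX.Order))
g.add((EX.InvoiceService, EX.produces, EX.Invoice))

# Composition as a graph query: which service can take an Order further?
query = """
SELECT ?service ?output WHERE {
    ?service a ex:Service ;
             ex:consumes ex:Order ;
             ex:produces ?output .
}
"""
for service, output in g.query(query, initNs={"ex": EX}):
    print(f"{service} consumes an Order and produces {output}")
```

The appeal of this framing is that the same machinery already used for data fabrics, shared vocabularies plus federated queries, can describe APIs and services instead of tables and columns.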
The case of SAP