Tony Baer (dbInsight)
for Big on Data
| September 23, 2021
| Topic: Big Data Analytics
The old “garbage in, garbage out” adage has never gone out of style. The ravenous appetite for data on the part of analytics and machine learning models has elevated the urgency to get the data right. The discipline of DataOps has emerged in response to the need for business analysts and data scientists alike to have confidence in the data that populates their models and dashboards.
The stakes for getting data right are rising as data engineers and data scientists build countless data pipelines to populate their models. We have long worried about drift in AI and ML models, but could the same happen with data sources that degrade or go stale? Or with data pipelines whose operations gradually veer off course because of issues such as unexpected latency that disrupts the reliability of data filtering or transforms?
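To make the concern concrete, here is a minimal sketch (not from the article; the table name, load time, and thresholds are hypothetical) of the kind of basic freshness and volume checks a data team might bolt onto a pipeline to catch a stale source or a sudden drop in rows before it skews a model or dashboard.

```python
# Minimal, illustrative pipeline health checks. All names and thresholds
# are hypothetical examples, not from the article.
from datetime import datetime, timedelta, timezone


def check_freshness(last_loaded_at: datetime, max_age: timedelta) -> bool:
    """Return True if the source was refreshed within the allowed window."""
    return datetime.now(timezone.utc) - last_loaded_at <= max_age


def check_volume(row_count: int, trailing_avg: float, tolerance: float = 0.5) -> bool:
    """Return True if today's row count is within tolerance of the trailing average."""
    return row_count >= trailing_avg * (1 - tolerance)


if __name__ == "__main__":
    # Hypothetical metadata for a source table named 'orders'
    last_load = datetime.now(timezone.utc) - timedelta(hours=30)

    if not check_freshness(last_load, max_age=timedelta(hours=24)):
        print("ALERT: source table 'orders' looks stale")
    if not check_volume(row_count=41_000, trailing_avg=100_000.0):
        print("ALERT: row volume for 'orders' dropped sharply")
```

Checks like these are the simplest form of the data observability the article goes on to discuss; dedicated tools extend the same idea to schema changes, distribution shifts, and lineage.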