Written by
Joe McKendrick, Contributor
Posted in Service Oriented on January 7, 2022 | Topic: Artificial Intelligence
Developers and data scientists are human, of course, but the systems they create are not — they are merely code-based reflections of the human reasoning that goes into them. Getting artificial intelligence systems to deliver unbiased results and ensure smart business decisions requires a holistic approach that involves most of the enterprise.
Eliminating bias and inaccuracies in AI takes time. "Most organizations understand that the success of AI depends on establishing trust with the end-users of these systems, which ultimately requires fair and unbiased AI algorithms," says Peter Oggel, CTO and senior vice president of technology operations at Irdeto. "However, delivering on this is much more complicated than simply acknowledging the problem exists and talking about it."
More action is required beyond the confines of data centers or analyst sites. “Data scientists lack the training, experience, and business needs to determine which of the incompatible metrics for fairness are appropriate,” says Blackman. “Furthermore, they often lack clout to elevate their concerns to knowledgeable senior executives or relevant subject matter experts.”
It’s time to do more “to review those results not only when a product is live, but during testing and after any significant project,” says Patrick Finn, president and general manager of Americas at Blue Prism. “They must also train both technical and business-side staff on how to alleviate bias within AI, and within their human teams, to empower them to participate in improving their organization’s AI use. It’s both a top-down and bottom-up effort powered by human ingenuity: remove obvious bias so that the AI doesn’t incorporate it and, therefore, doesn’t slow down work or worsen someone’s outcomes.”
Finn adds, "Those who aren't thinking equitably about AI aren't using it in the right way."
Also: NYC Health Department creates coalition to end bias and ‘race-norming’ in medical algorithms
Solving this challenge “requires more than validating AI systems against a couple of metrics,” Oggel says. “If you think about it, how does one even define the notion of fairness? Any given problem can have multiple viewpoints, each with a different definition of what is considered fair. Technically, it is possible to calculate metrics for data sets and algorithms that say something about fairness, but what should it be measured against?”
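To make Oggel's point concrete, demographic parity is one such metric, and it can conflict with alternatives like equalized odds. Here is a minimal sketch in Python of how it might be computed; the predictions, group labels, and function name are invented for illustration, not taken from any particular toolkit:

```python
# A minimal sketch of one fairness metric: demographic parity difference.
# The predictions and group labels below are illustrative, not real data.
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Gap between positive-prediction rates across groups (0.0 means parity)."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical model outputs (1 = approved) and a protected attribute.
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_difference(y_pred, group))  # 0.5 -> large disparity
```

A gap of 0.5 clearly flags a disparity, but whether any given gap is acceptable, and against what baseline, is exactly the judgment call Oggel says the metrics alone cannot make.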
Oggel says more investment is required “into researching bias and understanding how to eliminate it from AI systems. The outcome of this research needs to be incorporated into a framework of standards, policies, guidelines and best practices that organizations can follow. Without clear answers to these and many more questions, corporate efforts for eliminating bias will struggle.”
AI bias is often “unintentional and subconscious,” he adds. “Making staff aware of the issue will go some way to addressing bias, but equally important is ensuring you have diversity in your data science and engineering teams, providing clear policies, and ensuring proper oversight.”
While opening up projects and priorities to the enterprise takes time, there are short-term measures that can be taken at the development and implementation level.
Harish Doddi, CEO of Datatron, advises asking the following questions as AI models are developed:
- What were the previous versions like?
- What input variables are coming into the model?
- What are the output variables?
- Who has access to the model? Has there been any unauthorized access?
- How is the model behaving when it comes to certain metrics?
During development, "machine learning models are bound by certain assumptions, rules and expectations," which may produce different results once the models are put into production, Doddi explains. "This is where governance is critical." Part of this governance is a catalog that keeps track of all versions of models. "The catalog needs to be able to keep track and document the framework where the models are developed and their lineage."
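Several of Doddi's questions, such as previous versions, input and output variables, and lineage, map naturally onto catalog entries. Below is a minimal sketch of such a catalog in Python; the field names, registry structure, and example values are illustrative assumptions, not Datatron's implementation:

```python
# A minimal sketch of a model catalog: each entry records the version,
# framework, data lineage, and variables. All field names are assumptions.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelCatalogEntry:
    name: str
    version: str
    framework: str          # e.g. "xgboost 1.7" -- where the model was developed
    training_data: str      # pointer to the dataset snapshot (lineage)
    input_variables: list
    output_variables: list
    registered_at: datetime = field(default_factory=datetime.now)

catalog: dict[tuple, ModelCatalogEntry] = {}

def register(entry: ModelCatalogEntry) -> None:
    """Keep every version, rather than overwriting, so lineage stays auditable."""
    catalog[(entry.name, entry.version)] = entry

register(ModelCatalogEntry(
    name="credit-risk", version="2.1.0", framework="xgboost 1.7",
    training_data="s3://datasets/loans/2021-12-01",
    input_variables=["income", "tenure"], output_variables=["default_prob"],
))
```

Because every version is retained rather than overwritten, the lineage of a production model can be audited after the fact.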
Enterprises "need to better ensure that commercial considerations don't outweigh ethical considerations. This is not an easy balancing act," Oggel says. "Some approaches include automatically monitoring how model behavior changes over time on a fixed set of prototypical data points. This helps in checking that models are behaving in an expected manner and adhering to some constraints around common sense and known risks of bias. In addition, regularly conducting manual checks of data examples to see how a model's predictions align with what we expect or hope to achieve can help to spot emergent and unexpected issues."
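One way to sketch Oggel's fixed-point monitoring in Python; the reference inputs, baseline predictions, tolerance, and stub model are all illustrative assumptions rather than a prescribed implementation:

```python
# A minimal sketch of re-scoring a fixed set of prototypical data points
# after each retrain and flagging drift. Values and threshold are invented.
import numpy as np

# Reference inputs chosen once, plus the predictions the approved production
# model made on them at sign-off time.
reference_inputs = np.array([[30_000, 2], [90_000, 10], [55_000, 5]])
baseline_preds = np.array([0.62, 0.08, 0.21])

def check_behavior(model, tolerance: float = 0.05) -> bool:
    """Return True if the model still behaves as expected on the fixed points.

    Assumes an sklearn-style classifier exposing predict_proba().
    """
    new_preds = model.predict_proba(reference_inputs)[:, 1]
    drift = np.abs(new_preds - baseline_preds).max()
    if drift > tolerance:
        print(f"Behavior shift of {drift:.2f} exceeds tolerance; review model.")
        return False
    return True

class _StubModel:
    """Stand-in for a real classifier, for demonstration only; ignores X."""
    def predict_proba(self, X):
        return np.column_stack([1 - baseline_preds, baseline_preds + 0.01])

print(check_behavior(_StubModel()))  # True: a 0.01 shift is within tolerance
```

In practice, the tolerance and the choice of prototypical points would come from the common-sense constraints and known bias risks Oggel mentions, not from a hard-coded constant.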