
Opening the Window on AI Transparency

June 18, 2020

Featured article by Charles Simon, Independent Technology Author

AI transparency is one of today’s catchphrases, but what does it really mean? What problem does it address? And will it really make a difference?

With the explosion of AI, we’ve seen a corresponding explosion of AI applications, many of which don’t work all that well. The problem is that while AI is a powerful, useful technology, it is a tool, not magic. In the hands of careful professionals it is extremely useful, but even then the results are subject to misinterpretation.

Artificial Intelligence, in fact, is not “intelligent,” and Deep Learning involves no “learning” in the common meanings of those words. In the human world, intelligence implies reason and learning implies understanding, neither of which is present in a typical AI system. Our approach to AI would be better served if the field were renamed “Artificial Association” or “Deep Correlation,” names that acknowledge these are powerful statistical methods. Facial recognition, for example, simply associates certain images with others, while a stock-market prediction using AI correlates present market conditions with likely future performance. As such, AI applications are subject to the same kinds of misuse as other statistical methods.

Even when an AI application is working well, it is usually a “black box”: we can see the inputs and the outputs, but we don’t really know what’s going on inside. One might think that adding transparency to an AI application would involve tracing the internals of its decision process, but it’s not that simple. Instead, given an input sample and its associated output, small areas of the input sample are modified to see which modifications cause the output to change.

An input image of a husky dog (the input), for example, might be misclassified as a wolf (the output) because there is snow in the image; masking the snow is the modification that flips the output. From this, we can conclude that the sample set used to train the AI included images of wolves that disproportionately also included snow.
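
To make the perturbation approach concrete, here is a minimal sketch in Python. It assumes a hypothetical predict() function that takes an image array and returns class probabilities; the patch size and gray-value baseline are illustrative choices, not part of any particular tool.

import numpy as np

def occlusion_sensitivity(image, predict, patch=16, baseline=0.0):
    # Classify the unmodified image and remember its top class.
    base_probs = predict(image)
    top_class = int(np.argmax(base_probs))
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    # Mask one patch at a time and measure how much confidence drops.
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = baseline
            drop = base_probs[top_class] - predict(occluded)[top_class]
            heatmap[i // patch, j // patch] = drop  # a large drop means the region mattered
    return top_class, heatmap

In the husky/wolf case, the patches with the largest confidence drops would cover the snowy background rather than the animal itself, which is exactly the clue that the training set, not the subject, drove the classification.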

We can glean several insights from this example. Obviously, any trained AI is at the mercy of the tagged sample set used to train it; even if the tagging is perfect, the selection of the sample set has a direct impact on the resulting model. The example also shows how the AI’s lack of intelligence leaves it unable to recognize that we were interested in the “subject” of the picture (the dog or wolf); the AI has no concept that an image may have a subject.

Central to the problem is that we have set high expectations for our AI applications. When they work properly, we assume it is because of abilities that may or may not actually be present.

The issue with AI transparency is two-fold: it has to be used, and it has to be understood. One can’t check every input sample and every AI decision, because doing so would defeat the purpose of having AI in the first place. So we spot-check some samples that were identified correctly and some errors, and hope to gain a better understanding of how the AI is working and how to correct it when it is not.
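
A simple way to organize such spot-checks is sketched below in Python: sample a handful of correct predictions and a handful of errors for manual review. The function name and the default of five samples per group are illustrative assumptions, not an established procedure.

import random

def spot_check(samples, labels, predictions, k=5, seed=0):
    # Indices where the model was right and where it was wrong.
    correct = [i for i, (y, p) in enumerate(zip(labels, predictions)) if y == p]
    errors = [i for i, (y, p) in enumerate(zip(labels, predictions)) if y != p]
    rng = random.Random(seed)
    picks = rng.sample(correct, min(k, len(correct))) + rng.sample(errors, min(k, len(errors)))
    # Return the chosen samples with their true and predicted labels for review.
    return [(samples[i], labels[i], predictions[i]) for i in picks]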

The final issue is that what we choose to do with the AI’s results can be more important than the results themselves. AI transparency can explain the “why” of the system’s own operation and help verify that a result is correct, but it cannot explain why that result matters or what is needed to turn it into a useful decision.

Returning to facial recognition, AI transparency can help such an application be improved so that people are recognized more accurately. But what we do with that information is key to the success of AI. Consider the following simplistic example of a facial recognition program used to pick criminals out of images of a crowd:

                          Actually a criminal      Not a criminal
Flagged as criminal       Box 1: correct match     Box 2: false positive
Not flagged               Box 3: false negative    Box 4: correct non-match

The facial recognition algorithm focuses on maximizing boxes 1 and 4, where the recognition is correct. While accuracy may be 99.97% under ideal conditions, it may drop to only 90% with images “in the wild.” This means that boxes 2 (false positives) and 3 (false negatives) can represent a significant portion of the results. Developers can often tune the AI to trade box 2 errors for box 3 errors, and vice versa. AI transparency can help lower the 10% overall error rate, but how the remaining errors are allocated between false positives and false negatives is a separate issue, and what we consider an acceptable error rate is outside the scope of AI altogether.
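
The trade-off between boxes 2 and 3 usually comes down to a decision threshold. The Python sketch below uses made-up match scores and labels (hypothetical data, not real results) to show how raising the threshold reduces false positives at the cost of more false negatives.

import numpy as np

def confusion_counts(scores, is_criminal, threshold):
    # Flag anyone whose match score meets or exceeds the threshold.
    flagged = scores >= threshold
    tp = int(np.sum(flagged & is_criminal))    # box 1: correct match
    fp = int(np.sum(flagged & ~is_criminal))   # box 2: false positive
    fn = int(np.sum(~flagged & is_criminal))   # box 3: false negative
    tn = int(np.sum(~flagged & ~is_criminal))  # box 4: correct non-match
    return tp, fp, fn, tn

scores = np.array([0.2, 0.55, 0.7, 0.9, 0.4, 0.85])             # hypothetical match scores
is_criminal = np.array([False, False, True, True, False, True])  # hypothetical ground truth
for t in (0.5, 0.8):
    tp, fp, fn, tn = confusion_counts(scores, is_criminal, t)
    print(f"threshold={t}: TP={tp} FP={fp} FN={fn} TN={tn}")

Nothing in the code decides which kind of error is worse; that allocation, like the acceptable overall error rate, is a policy choice that sits outside the model.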

Bottom line: AI transparency should be considered a powerful tool, but that doesn’t make AI perfect.

About the Author

Charles Simon, BSEE, MSCS, is a nationally recognized entrepreneur and software developer with many years of industry computing experience, including pioneering work in AI. Mr. Simon’s technical experience includes the creation of two unique Artificial Intelligence systems along with software for successful neurological test equipment; combining AI development with biomedical nerve-signal testing gives him singular insight. He is also the author of Will the Computers Revolt?: Preparing for the Future of Artificial Intelligence and the developer of Brain Simulator II, an AGI research software platform that combines a neural network model with the ability to write code for any neuron cluster, making it easy to mix neural and symbolic AI code. More information on Mr. Simon can be found at: https://futureai.guru/Founder.aspx

