Supporting human efforts at data processing is one of the promises of artificial intelligence (AI) and machine learning (ML), but the technology has its share of challenges, according to a recent webinar organized by the Silicon Valley Innovation Center (SVIC).

Many AI/ML algorithms perform well on easy problems, but they are far less successful with harder, real-life challenges. Image backgrounds, inconsistencies in how people label graphics, and variations in viewing angle can all lead to incorrect identifications. Training a computer to understand such nuanced differences would require humans to anticipate every possible variable and program the machine to recognize them. As a result, many regression-based models produce outcomes that are “statistically impressive, but individually unreliable,” said Dr. Pat Biltgen, one of the webinar presenters.

When embarking on an AI/ML program, Biltgen advocates being clear about what kind of problem the organization is trying to solve, so that appropriate training data can be selected.

Automaker Tesla addresses the need for safe, efficient transportation by gathering data from its drivers continuously. The car’s computer learns and improves from multiple inputs on an ongoing basis; for example, camera images of roadway lines tell the car when to stay between them and when to cross. This builds human trust in those systems, which in turn makes the technology more prevalent and successful.

Tesla follows the dominant machine learning approach to AI, in which algorithms are first exposed to large amounts of training data on which their decisions are based. These training data sets are where the challenges begin, because they can over-represent certain cases while under-representing others. For example, an image of a tree may get classified as a giraffe, not just because it is tall but because giraffe sightings are rare enough that people photograph them often and share those pictures widely, skewing public image collections.

“Giraffes are very overrepresented in image data sets,” said Biltgen, director of analytics and data science at Perspecta. “If you train your algorithm on publicly available imagery, you’ll get a lot of false alarms that are related to this phenomenon.”
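
A quick audit of class frequencies can surface this kind of skew before any training begins. The sketch below is illustrative rather than anything shown in the webinar; the function name, labels, and thresholds are all hypothetical:

```python
from collections import Counter

def class_balance_report(labels, min_share=0.01, max_share=0.20):
    """Flag classes that look over- or under-represented.

    `labels` is one class name per training image. The thresholds are
    illustrative; a real audit would compare each class's share against
    its known real-world prevalence.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    for cls, n in counts.most_common():
        share = n / total
        note = ""
        if share > max_share:
            note = "  <-- overrepresented (the 'giraffe' effect)"
        elif share < min_share:
            note = "  <-- underrepresented: expect poor recall here"
        print(f"{cls:10s} {n:6d} ({share:6.1%}){note}")

# Hypothetical labels scraped from a public image collection
labels = ["tree"] * 500 + ["giraffe"] * 400 + ["lamppost"] * 4
class_balance_report(labels)
```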

Training data can be flawed in other ways, too. When a data set contains a large number of common events and only a few rare ones, the algorithm sees the rare things just a couple of times, Biltgen adds. That becomes a problem when those rare events are exactly what the AI is supposed to detect. “In industries such as healthcare, defense, intelligence and criminal justice, it is often the anomalies that are of interest.”
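
One common mitigation, not specific to the webinar, is to re-weight the training loss so that rare classes count for more. A minimal sketch of “balanced” inverse-frequency weighting, assuming a simple two-class label list:

```python
from collections import Counter

def inverse_frequency_weights(labels):
    """'Balanced' class weights: total / (n_classes * count_per_class),
    so a class seen 100x less often weighs roughly 100x more in the loss."""
    counts = Counter(labels)
    total = len(labels)
    n_classes = len(counts)
    return {cls: total / (n_classes * n) for cls, n in counts.items()}

# Hypothetical event stream with a 1% anomaly rate
labels = ["normal"] * 9900 + ["anomaly"] * 100
print(inverse_frequency_weights(labels))
# -> {'normal': 0.505..., 'anomaly': 50.0}
```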

Another AI/ML issue is that performance suffers when the training data contains errors or is even slightly off. Biltgen’s team investigated such a case by purposely mislabeling 2% of a dataset to see how an algorithm would react. The experiment was an attempt to replicate real-world circumstances, where data is labeled by humans who are prone to errors, particularly when carrying out tedious tasks over long periods. The team found that including the mislabeled data increased the algorithm’s error rate by far more than 2%, even on a relatively simple task like identifying images of airplanes.

“The algorithm was getting confused very quickly by the mislabeled images, so this doesn’t give us a lot of confidence that very large human-labeled datasets are going to be accurate enough to produce highly accurate models in very serious conditions,” Biltgen said.
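
The shape of that experiment is easy to reproduce on synthetic data. The sketch below is not Perspecta’s actual setup, just an illustration of the same idea using scikit-learn: flip a fraction of the training labels, train, and score against clean labels. How sharply accuracy falls will depend on the model and data; the helper name `accuracy_with_label_noise` is hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a binary image task (airplane / not-airplane)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

def accuracy_with_label_noise(noise_rate):
    """Flip a fraction of the training labels, then score on clean labels."""
    y_noisy = y_tr.copy()
    n_flip = int(noise_rate * len(y_noisy))
    if n_flip:
        idx = rng.choice(len(y_noisy), size=n_flip, replace=False)
        y_noisy[idx] = 1 - y_noisy[idx]  # flip 0 <-> 1
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_noisy)
    return model.score(X_te, y_te)

for rate in (0.00, 0.02, 0.10):
    print(f"{rate:4.0%} mislabeled -> test accuracy "
          f"{accuracy_with_label_noise(rate):.3f}")
```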

The fleet-learning model that Tesla follows offers one way around this: the car learns to stay in its lane because that is what the human behind the wheel is doing. This removes the need to hand-design a training data set that covers every rare event on the road.
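
Conceptually, this is close to behavioral cloning: the human driver’s actions supply the labels for free. A toy sketch under that assumption, with entirely synthetic stand-in data:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Toy behavioral-cloning sketch: the supervision signal is whatever the
# human driver actually did, so no hand-built catalog of rare road
# events is needed. All data here is synthetic and hypothetical.
rng = np.random.default_rng(1)
frames = rng.normal(size=(1000, 16))        # stand-in camera features
true_policy = rng.normal(size=16)           # stand-in for human behavior
steering = frames @ true_policy + rng.normal(scale=0.1, size=1000)

model = Ridge().fit(frames, steering)       # learn to imitate the driver
print("imitation fit R^2:", round(model.score(frames, steering), 3))
```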

Biltgen encourages companies to maintain a healthy skepticism about the technology. There are hard problems that AI/ML cannot yet solve, just as there are trivial problems it can. The trick for enterprises is to explore the space between those two poles, experimenting and adapting in order to gradually build trust in algorithms, so that they might one day be relied upon to tackle a company’s toughest, most mission-critical problems.
