Cracking the Code on Self-Checkout

How computer vision compares apples to apples
Photograph: Shutterstock

In theory, self-checkout is a retail efficiency play presented as a fast and easy customer convenience. In practice, it’s too often neither fast nor easy.

That’s the view of Ariel Shemesh, at least, who happens to be in the business of keeping a close eye on such things.

Shemesh’s firm, KanduAI, has developed technology aimed at the heart of customer friction and retailer woes in self-checkout: It recognizes items such as produce sold by weight, which cannot be scanned by barcode, sparing customers from punching in a PLU code for every different item. That manual lookup does more than jam lines and inconvenience shoppers, Shemesh argues: It invites errors and fraud that can harm the bottom line, undermining the efficiency that self-service promises. An imperfect customer experience, in the meantime, may be suppressing adoption.

It’s no secret that computer vision and artificial intelligence are key to addressing these issues, but they too can introduce uncomfortable compromises: high costs for new hardware and servers, and a lengthy ramp-up as those systems process and learn. The KanduAI solution differs, Shemesh explains, by delivering those capabilities as edge-computing software that works with existing checkout systems, analyzing the data they already generate and then “semi-automatically” determining what they show.


Ariel Shemesh, co-founder and CEO, KanduAI

“Most of the self-checkout machines and scales already have some sort of camera,” Shemesh says. “What we have is our proprietary neural network, which analyzes those images and knows to identify what those items are. The uniqueness of the solution is that we do it without any cloud or server; the underlying technology enables us to do it with the existing hardware. So from the retailer’s perspective, the total cost of ownership is rather low. They only need to invest in the license of the software, and the return on investment is immediate.”

Shemesh describes a system more reliably pragmatic than dazzlingly precise. It doesn’t presume to distinguish a conventional apple from its organic counterpart, although the stickers and ribbons that often accompany organics make that determination easier than it first seems, he says. And for items like apples, whose many varieties may not be immediately distinguishable to a camera peering into a plastic bag, KanduAI at the very least knows it is looking at apples. Then, instead of asking the shopper to scroll through hundreds of items and punch in the correct PLU code, it presents a narrowed list and only asks the shopper to point at the right one.
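The narrowing step can be illustrated in miniature. The sketch below is not KanduAI’s actual system; it is a generic, hypothetical example of how a classifier’s ranked confidence scores can shrink a lookup of hundreds of PLU entries to a short pick list the shopper confirms with one tap. The class names, PLU codes, and scores are invented for illustration.

```python
# Hypothetical classifier output: a confidence score per produce class.
# In a real deployment these would come from a neural network running
# on the checkout's existing hardware.
scores = {
    "4131 Fuji apple": 0.46,
    "4017 Granny Smith apple": 0.31,
    "4015 Red Delicious apple": 0.14,
    "4011 Banana": 0.05,
    "4062 Cucumber": 0.04,
}

def shortlist(scores, k=3, min_conf=0.10):
    """Return up to k high-confidence candidates for the shopper to pick from."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return [name for name, conf in ranked[:k] if conf >= min_conf]

print(shortlist(scores))
```

Here the shopper chooses among three apple varieties rather than keying in a PLU by hand; the confidence threshold keeps implausible candidates off the list.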

This alone greatly reduces the incidence of costly mistakes while dramatically improving the user experience. Both deliver payoffs for the retailer.

“We have seen that before installing our software, the error rate is around 15%: Users select the wrong PLU about 15% of the time. After installing our solution, this is reduced to 3%. That is not insignificant; 15% is meaningful,” he says. “The other thing that happens, and this is harder to quantify, is that researchers have shown self-checkout adoption is still lagging. The main reason for that is customers find it unfriendly, and produce is one of the key reasons. Almost everybody knows how to scan a barcode today, but with produce it’s a question of how do I find the right item? And so anything that makes the customer experience easier will have an impact on adoption.”

Founded in Israel, with U.S. offices in San Jose, Calif., KanduAI came to retail through a previous company Shemesh had founded, whose computer-vision video-analytics applications attracted the attention of a German food retailer seeking a solution to the very self-checkout issues KanduAI would go on to address. The solution is not in use at that company (“yet,” Shemesh says), but other European chains he would not identify are running it, and integrations with retail technology providers such as Zebra and Fujitsu are bringing it elsewhere. He anticipates a significant U.S. food retailer will announce a rollout later this year.

Generally, he says, KanduAI needs about two to four weeks in a stealth “learning mode” to develop sufficient accuracy. It comes out of the box knowing nothing, but after that period it can recognize 90% of what it encounters, on the way to roughly 95% accuracy. “Scientifically, it’s practically impossible to get to 100%,” he says.

By making recognition of nonpackaged items as easy as scanning packaged ones, Shemesh says he foresees a similar rate of adoption. “Thirty years ago, there were no barcode readers in stores, but once they became a commodity it became hard to find a store without one,” he notes. “I’m pretty sure within five to 10 years the same thing will happen with computer vision.”

