Just when we think we have a handle on all the incredible ways that dogs enhance our lives and our understanding of the world, new work with dogs expands that sphere even further. Graduate student Kiana Ehsani at the University of Washington has a great collaborator named Kelp, an Alaskan Malamute, who is a key partner in her quest to create an artificial intelligence system that thinks like a dog. The long-term goal is to produce a robot that is enough like a dog to perform many of the tasks that dogs are trained to do for humans. Though that may seem like a faraway dream, Ehsani’s research project is edging ever closer to that possibility.
Broadly, the goal of the current research was to study and emulate the dog’s response to visual information. Specifically, the scientists wanted to teach a machine to act and plan like a dog based on visual input, which required modeling the dog’s future actions from the images she had seen previously. Dogs make decisions all the time based on what they see, whether it is a ball being tossed their way or a tree in the path that they must walk around. Vision underlies many tasks of interest in artificial intelligence, such as facial recognition, object detection, object tracking, determining which surfaces can be walked on, and route planning.
To build the foundation dataset for the models, Ehsani and the team of scientists she leads attached a number of sensors to Kelp—on the head, torso and tail—for a few hours a day to capture her movements as she went about her daily activities. A camera attached to her head recorded what was in her view during this time, part of which was spent indoors and part outside. Over several weeks of data collection, Kelp provided them with 24,000 images, each associated with specific movements of her body.
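The essence of this dataset is the pairing of each camera frame with the sensor readings recorded at the same moment. The sketch below is not the researchers' actual data format, which the article does not describe; it is a hypothetical illustration of what one such record might look like, with invented field names and values.

```python
# Hypothetical sketch of one record pairing a camera frame with movement
# data; the field names and values are invented for illustration.
from dataclasses import dataclass, field


@dataclass
class DogFrame:
    """One moment of data collection: what Kelp saw and how she moved."""
    timestamp: float    # seconds since the recording session started
    image_path: str     # frame captured by the head-mounted camera
    joint_angles: dict = field(default_factory=dict)  # readings from the body sensors


# An invented example record:
record = DogFrame(
    timestamp=12.5,
    image_path="frames/00312.jpg",
    joint_angles={"head": 14.0, "torso": -3.2, "tail": 8.5},
)
```

Pairing each image with simultaneous movement readings is what lets a model later learn which sights lead to which motions.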
The next step was to feed these data into a computer system that uses statistics to improve its ability to perform tasks without additional programming—a technique called “machine learning”. Machine learning is a branch of computer science in which algorithms look for patterns in data and use those patterns to make predictions about new cases.
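To make the idea of learning patterns from data concrete, here is a deliberately tiny sketch, far simpler than the team's actual system: a program that counts which action most often follows each kind of scene and then predicts from those counts. The scene and action labels are invented for illustration.

```python
# Minimal illustration of learning from examples rather than from
# hand-written rules. Scenes and actions are invented labels.
from collections import Counter, defaultdict


def train(examples):
    """Count how often each action follows each kind of scene."""
    table = defaultdict(Counter)
    for scene, action in examples:
        table[scene][action] += 1
    return table


def predict(table, scene):
    """Predict the action observed most often with this scene."""
    return table[scene].most_common(1)[0][0]


examples = [
    ("ball_in_air", "run"),
    ("ball_in_air", "run"),
    ("tree_ahead", "turn"),
    ("ball_in_air", "jump"),
]
model = train(examples)
print(predict(model, "ball_in_air"))  # → run
```

No rule for chasing balls was ever written down; the behavior emerges from the examples, which is the core idea behind the much more sophisticated models the researchers trained.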
Ehsani and her colleagues were able to generate a system that could predict a dog’s behavior from visual input, although it was only accurate in the short term: the machine could predict the next five moves the dog would make based on the previous five images. Along the way, the system also becomes better at identifying which surfaces the dog can walk on, which environment—dog park, indoors, street, stadium, alley—the dog is in based on her movements, and how the dog will move in response to a variety of images.
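One way to picture this prediction task, under the assumption (not spelled out in the article) that the recording is treated as a timeline, is as a sliding window: each training pair maps the five most recent frames to the five moves that followed them. The helper below is a hypothetical sketch of that framing, with placeholder frame and move labels.

```python
# Hedged sketch of framing "predict five moves from five images" as a
# sliding window over the recording; labels are placeholders.
def make_pairs(frames, actions, window=5):
    """Build (past frames, future actions) training pairs."""
    pairs = []
    for i in range(len(frames) - 2 * window + 1):
        past = frames[i : i + window]            # the five images seen
        future = actions[i + window : i + 2 * window]  # the five moves that followed
        pairs.append((past, future))
    return pairs


frames = [f"img{i}" for i in range(12)]
actions = [f"move{i}" for i in range(12)]
pairs = make_pairs(frames, actions)
print(len(pairs))    # → 3
print(pairs[0][0])   # → ['img0', 'img1', 'img2', 'img3', 'img4']
print(pairs[0][1])   # → ['move5', 'move6', 'move7', 'move8', 'move9']
```

The short window also suggests why accuracy falls off beyond the near term: the further the predicted moves lie from the observed frames, the less the recent images constrain them.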
Eventually, the researchers hope to build four-legged robots that could function as, and ultimately replace, service dogs. The advantages of a robot are obvious: training service dogs is expensive, and many dogs that begin training never make the cut. On the down side, service dogs provide companionship and social facilitation—benefits that could be lost with robots. Many people consider the use of robots as service dogs unrealistic, and even those with high hopes point out that such a practical application is a long way off and that this initial work is in its early stages.
The scientists are pleased with the work so far but hope to continue towards even loftier goals. They conclude their paper by saying, “We hope this work paves the way towards better understanding of visual intelligence and of the other intelligent beings that inhabit our world.”