3T Robot Intelligence
With this software, robots can search for, find, and recognize people, and recognize gestures in order to carry out tasks -- such as performing inspections for the space station or hunting for mines underwater.
For the last 20 years, TRACLabs researchers have been pioneering the science of robotics and automation.
First Steps In Early Robotics & Automation
Researchers in Artificial Intelligence have always dreamed of building the artificial person. Up until 1986, the best they could do was build mobile platforms that carefully looked at the world (with cameras and sonar), analyzed what was going on, figured out what to do next, and then acted. The result was robots that worked at a snail's pace, sometimes standing motionless for minutes on end.
Then in 1986, a new idea emerged. Inspired by the way animals behave and humans carry out the routine activities of ordinary life, a new band of robotics engineers started designing much higher-functioning robots that could move about the world very quickly. These robots were able to sense the world and act (or react) in less than a second. Now we had robots that could travel down hallways without hitting people, track moving objects, and even open doors.
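The reactive approach described above can be sketched as a tight sense-act loop. This is only an illustration, not TRACLabs code; the `sonar`, `heading`, and `position` fields are hypothetical stand-ins for a real robot's sensors and actuators.

```python
def sense(robot):
    # Read the latest sonar ranges (hypothetical sensor reading).
    return robot["sonar"]

def act(robot, ranges):
    # React immediately: veer away if any obstacle is too close,
    # otherwise keep moving forward.
    if min(ranges) < 0.5:          # obstacle within half a meter
        robot["heading"] += 15     # turn away
    else:
        robot["position"] += 1     # advance one step

def reactive_loop(robot, steps):
    # Each sense-act cycle completes in well under a second,
    # so the robot never stands still deliberating.
    for _ in range(steps):
        act(robot, sense(robot))

robot = {"sonar": [2.0, 0.3, 1.5], "heading": 0, "position": 0}
reactive_loop(robot, steps=1)
# The 0.3 m sonar return triggers an immediate turn: heading becomes 15.
```

The key design choice is that no planning happens inside the loop; the mapping from sensing to action is direct, which is what makes the behavior fast.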
A New Generation Uses Layered AI
However, because they only did one or two things really well, these robots could never serve as butlers or chauffeurs, search for drowning swimmers, or repair satellites. So a small set of researchers began devising ways to layer intelligence over these agile, fast-moving agents. TRACLabs researchers were leaders in this effort. We layered intelligence first by teaching the robots about routines, or recipes for action. For example, to fetch a cup for Linda from a table in the conference room, a simple recipe might be: first navigate to the conference room and locate the cup, then move to the cup and pick it up, and finally navigate to where Linda is. Since post-1986 robots could already navigate through office buildings, these robots could now perform pick-and-place tasks with ease and execute these new action recipes with the same speed and agility as a human.
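The "fetch a cup" recipe above is essentially an ordered sequence of calls into the reactive layer's skills. The sketch below illustrates that idea with hypothetical skill functions operating on a simple state dictionary; it is not the actual 3T recipe language.

```python
def navigate_to(state, place):
    # Hypothetical skill: the reactive layer handles hallways,
    # doors, and obstacles on its own during navigation.
    state["location"] = place
    return state

def locate(state, obj):
    # Hypothetical skill: find the object with cameras/sonar.
    state["target"] = obj
    return state

def pick_up(state):
    # Hypothetical skill: grasp whatever was just located.
    state["holding"] = state.pop("target", None)
    return state

# A recipe is just an ordered list of skill invocations.
FETCH_CUP_RECIPE = [
    lambda s: navigate_to(s, "conference room"),
    lambda s: locate(s, "cup"),
    lambda s: pick_up(s),
    lambda s: navigate_to(s, "Linda's office"),
]

def run_recipe(state, recipe):
    for step in recipe:
        state = step(state)
    return state

result = run_recipe({}, FETCH_CUP_RECIPE)
# result: {"location": "Linda's office", "holding": "cup"}
```

The point of the layering is that the recipe never mentions wheels or joints; each step delegates to a skill that already works at reactive speed.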
While our smarter robots could now fetch a can of soup from a supermarket shelf, they still couldn't go grocery shopping. Therefore, we layered another level of intelligence on our robots -- planning and scheduling. Prior to 1986, AI planning techniques were used on robots to attempt to imbue them with human-level intelligence, but the planning routines of those early robots had to plan every wheel turn and every joint motion. Now, with our layered-intelligence robots, the planning routines only have to plan down to the action recipes in the robot's first layer of intelligence. In essence, our robots can now, in principle, plan their whole day.
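The difference in planning granularity can be shown in a few lines. In this hypothetical sketch (the recipe names and goals are invented for illustration), the planner only sequences whole recipes; it never reasons about wheel turns or joint motions, which is what made pre-1986 planners so slow.

```python
# Hypothetical recipe library: each entry lists the action
# recipes that achieve one named goal.
RECIPES = {
    "have_cup": ["fetch cup for Linda"],
    "groceries": ["drive to store", "fetch items on list",
                  "check out", "drive home"],
}

def plan_day(goals):
    # Plan only down to recipe granularity; each recipe is
    # executed by the lower layers of intelligence.
    plan = []
    for goal in goals:
        plan.extend(RECIPES[goal])
    return plan

schedule = plan_day(["have_cup", "groceries"])
# schedule: ["fetch cup for Linda", "drive to store",
#            "fetch items on list", "check out", "drive home"]
```

Because the plan has only a handful of steps rather than thousands of motor commands, planning an entire day's activity becomes computationally feasible.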
Machines That Know How To Save Lives & Keep Us Safe
Over the past 15 years, TRACLabs has used its 3T robot intelligence software to program robots to search for, find, and recognize people, to recognize a user's gestures to carry out tasks, to perform inspection tasks for the International Space Station, to hunt for mines underwater, and to form teams with humans to carry out repair and replacement tasks on Earth or in space.
And there's more. Now we can layer intelligence on any computer-controlled machine, even ones that don't move or have arms, like the machines used to run a nuclear power plant, process wastewater, or recycle breathable air. While the reaction times are a lot slower for these "immobots", the layered intelligence principle still applies. So for the past ten years, TRACLabs has also been busy developing intelligent control systems for JSC's advanced life support projects, including biological water processors, water distillation systems, oxygen generation systems, and CO2 recovery systems. The results of several of these efforts were used in human-rated tests, including one with four humans living and working in a NASA biosphere for three months.
We have also extended our approach to intelligent robotics to include graphical and natural language interfaces. So users of TRACLabs robots can gesture to, speak to, or simply point-and-click the robots into action.