Moore’s Law of Robotics

I have no idea whether truly intelligent robots will ever exist, but I can definitely imagine that their actions will start to seem intelligent within the next twenty years. Being intelligent and seeming intelligent are close enough that I am not sure the difference matters. Being intelligent implies having the ability to create new solutions and ideas for situations never previously encountered. Seeming intelligent is applying existing solutions and ideas to new situations. Current robots and machines are far from seeming intelligent and even farther from being intelligent. However, as their accumulated set of capabilities increases, this will change.

Over the next couple of articles, I will sketch the idea that as the number of capabilities a robot possesses increases, the overall flexibility of its actions will increase as well. Flexibility of action is the ability to respond appropriately to situations which have never been encountered before. As an individual robot’s capabilities reach into the millions, and then billions, there will be fewer and fewer situations where it will be unable to complete its task. In such an environment, it will be very hard to tell whether a robot merely seems intelligent or actually is intelligent. The meaning of capabilities here will be rather broad, but the notion should be clear by the time we finish.

As we proceed, we will also establish that the robotics equivalent of Moore’s Law should be that over a period of time, the number of capabilities a robot possesses doubles.
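As a rough illustration of what such a doubling law would imply, here is a quick calculation. The starting count and the doubling period are assumptions made purely for the sake of arithmetic, not measurements of any real robot:

```python
# Illustrative arithmetic only: assume 1,000 capabilities today,
# doubling every 2 years (both numbers are assumptions, not data).
start_count = 1_000
doubling_period_years = 2

for years in range(0, 41, 10):
    count = start_count * 2 ** (years / doubling_period_years)
    print(f"After {years:2d} years: ~{count:,.0f} capabilities")
```

Under these assumed numbers, the count crosses one million at roughly twenty years and one billion at roughly forty, which is the scale at which genuinely new situations become rare.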


Exponential growth in the number of transistors

Moore’s law describes the doubling of transistors over a period of time, often described as somewhere between one and two years. We have been experiencing this phenomenon in our computer processors for the last several decades. The most common method of adding more transistors is to shrink their size. Unfortunately, this approach will likely hit physical limits within a decade. However, many alternative technologies are being examined to maintain the doubling trend. In the past, the doubling of transistors was also associated with a doubling of speed. However, the speed doubling ended a few years ago, and speeds will now improve only in fits and starts. While we may see up to a thousand-fold increase in transistor speeds sometime in the future, it is unlikely that switching speeds will reach the petahertz (1 followed by 15 zeros) range.

As the number of transistors increases, the number of computations performed simultaneously also increases. This means that over time, a single “processor” has more transistors to calculate math, make decisions, model physical reality, process images, process sound, and so on. Future doublings may lead to processors that are physically larger. More likely, chips will start to grow, shrink again as new tricks are discovered, then grow again, always striving to double the number of transistors. From a user’s point of view, whether the processor’s chip is a square or a cube does not matter. Nor does it matter much whether there is one chip or hundreds of chips networked together. What matters is that computers and robots will be able to ride this trend of increased processing power for the foreseeable future. Besides, computational power is an enabling technology, not an end in itself.

One way transistor counts keep doubling is by increasing the overall number of processing cores available. The more cores, the more tasks that can be performed simultaneously.

Exponential growth in the number of sensors

One metric that has been proposed as the Moore’s law for robotics is that over a period of time the number of sensors doubles. In the short term, growth in robot capabilities will be tied to growth in sensor count. A greater number of sensors, greater sensor sensitivity, and a greater range of senses (touch, sight, sound, chemical, etc.) will all benefit robotics. Eventually the data from all those sensors will amount to sheer information overload, and more than a few thousand or million sensors will be overkill. At some point, the set of sensors a robot contains will differentiate the capabilities of different robots, and this seems better than trying to cram all possible sensors into a single robot. One area where this rapid increase in sensor count can be observed today is in commercial automobiles. The recent DARPA Urban Challenge demonstrated the benefits of additional sensors, with the autonomous cars successfully driving in a town-like environment with traffic. This was a substantial improvement in self-driving capabilities compared to the two earlier challenges held out in the desert.

Examples of what robots will need to sense: chemicals, acceleration, velocity, current, voltage, altitude, direction, location, air pressure, contact pressure (i.e., touch), light, sound, infrared, radio frequencies, temperature, torque, weight, and so much more. For convenience, the transmission of the things being sensed should also be considered a sensor. That is, if a robot can sense radio frequencies (a radio receiver), then the complementary “sensor” would be a sender of radio frequencies (a radio transmitter). For light, one would count one sensor for detecting light and another for emitting it. In this way, robots will naturally develop many different forms of communication. Communication can be carried out not only by touch, sight, and sound, but also by chemical, electrical, magnetic, and other means.
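To make the sensor-plus-complementary-transmitter idea a little more concrete, here is a minimal sketch of how such pairs might be tallied. The modality names and the decision to count receivers and emitters separately are assumptions made for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Modality:
    name: str          # e.g. "radio", "light", "touch"
    can_sense: bool    # the robot has a receiver for this modality
    can_emit: bool     # the robot has the complementary transmitter/emitter

# Hypothetical robot: receivers and emitters are tallied separately,
# so a radio receiver and a radio transmitter count as two entries.
modalities = [
    Modality("radio", can_sense=True, can_emit=True),
    Modality("light", can_sense=True, can_emit=True),    # camera plus lamp/LED
    Modality("touch", can_sense=True, can_emit=False),
    Modality("chemical", can_sense=True, can_emit=False),
]

total = sum(m.can_sense for m in modalities) + sum(m.can_emit for m in modalities)
print(f"Counted sensors (receivers + emitters): {total}")

# Any modality the robot can both sense and emit on is a potential
# communication channel.
channels = [m.name for m in modalities if m.can_sense and m.can_emit]
print("Possible communication channels:", channels)
```

Any modality a robot can both sense and emit on becomes a candidate communication channel, which is how the chemical, electrical, and magnetic forms of communication mentioned above would arise.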

The nice thing about sensors as a metric is that they are easy to count. Since there are so many different types of sensors and things to sense, it is easy to imagine the need for a large number of sensors. Unlike transistors, sensors vary widely in complexity, which makes it hard to compare the benefit of one sensor type against another. Still, simplifying things to consider only the overall count makes sense: as the number of sensors becomes large, each individual sensor becomes less important, while the synergy from the variety of sensors becomes increasingly important.

As the number of sensors increases, the bandwidth of the networks feeding the sensor data to the processors will have to increase as well. Exponential growth in the number of sensors will also lead to exponential growth in network bandwidth.

Exponential growth in number of robot capabilities

Where robots will benefit the most is in having the number of capabilities they can perform increase exponentially. Just as robots will benefit from greater transistor counts, more sensors, and the synergy among them, they will also benefit from increased capabilities and the synergy of those capabilities. Being able to balance on two feet, combined with the ability to move each foot forward, leads to walking. The ability to balance is a complicated capability made up of many sub-capabilities, just as the ability to move a foot forward requires many sub-capabilities to make it happen. Similarly, being able to move an arm, and to open and close a hand, can be combined to allow for catching a ball. Unlike sensors, which are discrete, discernible entities, capabilities are harder to count. Is walking a single capability, or should the individual algorithms that lead to balance and movement be counted? For the purposes of tracking exponential growth, it really should be the underlying algorithms for balance and movement that are counted individually. These are the components that will be incrementally improved over time, leading to more and more robust skills. Continuing with the walking example, balance and overall walking skills have been steadily improving, as demonstrated by robots such as Asimo from Honda and, more recently, BigDog from Boston Dynamics.
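One way to make the counting question concrete is to treat a composite skill such as walking as a small tree of sub-capabilities and to count only the leaf algorithms. The decomposition below is purely a hypothetical illustration, not a claim about how any particular robot is built:

```python
from dataclasses import dataclass, field

@dataclass
class Capability:
    name: str
    sub_capabilities: "list[Capability]" = field(default_factory=list)

    def count_leaves(self) -> int:
        """Count the underlying leaf algorithms rather than composite skills."""
        if not self.sub_capabilities:
            return 1
        return sum(c.count_leaves() for c in self.sub_capabilities)

# Hypothetical decomposition of walking; the names are illustrative only.
balance = Capability("balance", [
    Capability("ankle torque control"),
    Capability("center-of-mass estimation"),
    Capability("fall detection"),
])
step = Capability("move foot forward", [
    Capability("hip trajectory planning"),
    Capability("knee actuation"),
])
walking = Capability("walking", [balance, step])

print(walking.count_leaves())  # 5 leaf algorithms underlie one composite skill
```

In this view, walking itself adds nothing extra to the count; what grows exponentially is the pool of leaf algorithms, each of which can be improved and reused independently.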

While some capabilities will be unique to the design of a particular robot, others will be more general and can be transferred from robot to robot. Both will see a rapid increase over the next few decades. Capabilities that are initially robot-specific will, over time, become more generic.

Countries like Japan have already started robot initiatives with the aim of placing robots in the homes of the elderly to help reduce health care costs as the nation grows increasingly grey-haired. Similarly, many car manufacturers are looking to increase the degree of automation available in the car with adaptive cruise control, self-driving, self-parking, blind spot warnings, entertainment systems, etc. With just these two initiatives, substantial amounts of money are being made available for research and implementation. This will lead to an even wider set of behaviors/capabilities for robots.

Up to now robots have been very limited in what they can do independently, but this is already starting to change quickly. While the autonomous cars from DARPA’s Urban Challenge are not yet ready to hit the road on their own, the existing technology will likely creep out into the market with features such as emergency collision avoidance. Even before we let cars take to the roads on their own, there shouldn’t be much resistance to a car that, having determined a collision is inevitable, steps in and takes control to reduce the forces experienced by the passengers. Over time more and more capabilities will be introduced, and confidence in those capabilities will eventually reach a point where drivers are willing to place their lives in the hands of their self-driving cars. Like any engineering project, there will be a lot of learning along the way, but with each success and failure the capabilities will become increasingly robust and able to handle a much wider range of situations. As situations are encountered which are not properly handled, engineers will analyze the failures, new capabilities will be added, and existing ones improved.

Implementation-specific and generic capabilities

Over time, interfaces for common tasks such as navigation will be standardized so that they can be added to many different types of robots, which then convert the navigation commands into specific commands sent to legs, wheels, treads, wings, etc. The same will hold true for other common operations such as walking, running, hand-eye coordination, object recognition, object handling, etc. These generic, high-level operations will still need to be handled in a way specific to each robot’s architecture: a two-legged robot will walk very differently from an eight-legged one, or even a one-legged one. Capabilities will arise to address these needs and, over time, become standardized and incrementally improved. The standardization will allow for rapid transfer of capabilities from one robot to another, and also allow improvements to spread faster.
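A minimal sketch of what such a standardized interface might look like, with every name and signature below invented for illustration rather than drawn from any existing robotics framework:

```python
from abc import ABC, abstractmethod

class Locomotion(ABC):
    """Hypothetical standardized interface: high-level motion commands that
    each robot body translates into its own actuator commands."""

    @abstractmethod
    def move_toward(self, x: float, y: float, speed: float) -> None:
        ...

class BipedLocomotion(Locomotion):
    def move_toward(self, x: float, y: float, speed: float) -> None:
        # A real implementation would plan footsteps and maintain balance.
        print(f"Biped: stepping toward ({x}, {y}) at {speed} m/s")

class WheeledLocomotion(Locomotion):
    def move_toward(self, x: float, y: float, speed: float) -> None:
        # A real implementation would set wheel velocities and steering.
        print(f"Wheeled: rolling toward ({x}, {y}) at {speed} m/s")

def navigate(robot: Locomotion, waypoints: "list[tuple[float, float]]") -> None:
    """Generic navigation capability, reusable across robot bodies."""
    for x, y in waypoints:
        robot.move_toward(x, y, speed=1.0)

navigate(BipedLocomotion(), [(1.0, 0.0), (2.0, 3.0)])
navigate(WheeledLocomotion(), [(1.0, 0.0), (2.0, 3.0)])
```

The generic navigate capability transfers unchanged from robot to robot; only the locomotion implementation underneath it is specific to a given body plan.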

The library of available capabilities will increase exponentially, much as transistors in a computer processor, hard disk storage, memory capacity, and digital camera resolution have. This increase in capability count will be followed by newer capabilities which use existing capabilities in new ways to accomplish tasks not previously performable by a robot. As new capabilities mature, they will become easier and easier to transfer to more and more robot variants, since a maturing capability will become more standardized. A concrete example of this would be traffic light recognition. Once one car manufacturer introduces the ability for a car to determine whether the traffic light is red, yellow, green, turn left, turn right, warning, etc., it will not be long before other manufacturers rush to introduce the capability as well. Over time the quality of traffic signal recognition will improve, and more and more cars will move from offering this as a purchase option to making it standard equipment, like air bags. Then it will become a commodity capability that is standardized, continually improved, and made cheaper and cheaper to implement. After a while, no one will think it very special that cars are able to handle a wide range of varied traffic lights appropriately, and it will be just one among millions of capabilities inherent in the car.

Slow and steady

Today robots operate in very well defined niches, under relatively controlled conditions. This is mostly because each and every robot available today has only a small number of specialized capabilities. But over time, the exponential growth in capabilities will lead to a slow and steady improvement and broadening of what robots can do. We are very early in the exponential growth of robot capabilities, so progress seems relatively slow, but the nature of exponential growth means that more and more capabilities will be released and improved each year. Eventually the rate of improvement will be so high that it will be hard to tell whether a robot is really intelligent, or whether it simply has such a large set of accumulated and integrated capabilities built in that its database of situations covers just about every situation it is likely to encounter. And in cases where the robot is not able to handle a new situation, the failure will be reported, existing capabilities will be further improved, and new capabilities added to convert the formerly new situation into a “been there, done that” situation. As the exponential growth in capabilities continues, there will be fewer and fewer new situations encountered.

Of course, once future robots are able to handle the majority of the situations they encounter, we’ll change the rules and start trying to apply robots to a broader and broader range of uses. Each time this happens, the initial robots will seem barely competent, and a bit clumsy at their new tasks, but that will change quickly. As the set of new tasks increases, so will the set of implemented capabilities. As more and more capabilities become standardized and integrated with other capabilities, fewer and fewer tasks will seem unknown to the robots being asked to perform them.

Moving forward

Over the next couple of articles we will look at this notion of exponential growth of capabilities as the Moore’s law of robotics. What is your view of capabilities as a metric?



2 Comments on “Moore’s Law of Robotics”

  1. Barnaby Dawson Says:

    Coming at the question from a biological perspective (although not being a biologist) I have several things to say:

    1) The crucial aspect of intelligence is plasticity. The human brain is capable of using many different parts for the same task. Visual data can be given to a person through their back (on pressure pads) or even their tongue, and their brain can learn to treat it just like data sent in through the optic nerve. In some animals (and possibly humans too) the optic nerve can be rerouted to an entirely different area of the brain (during embryology) and the animal’s brain grows with a completely different structure but is still able to function normally. In addition, people routinely use mental modules they developed for one task to perform an entirely different one (e.g. counting in one’s head to do simple sums).

    In my opinion, with AI we should create an AI brain with the same or similar degree of plasticity and then teach it these tasks. Any other route will take much longer because there are only so many available programmers.

    2) The algorithm behind the brain’s plasticity and its learning capacity must be pretty simple for several reasons. Firstly, there are not many genes responsible for the nervous system in people, and most of these probably code for essentially irrelevant internal aspects of neurons. This leaves very little space for a complex algorithm (the emergent behaviour is certainly complex, however). This means that it should not be beyond the 21st century wit of man to reverse engineer this basic algorithm or to create an equivalent algorithm.

    If we can create a suitably flexible AI brain then I agree that we should expect exponential growth in its capacities (whilst Moore’s law itself holds). However, without that I am much less confident in this as it seems to me that the speed with which capacities could be added is limited by programmer time.

  2. Stephen Says:

    Barnaby:

    There is no doubt that today’s programming paradigms lack plasticity. If one part of a system fails, it generally brings the entire system down. While there has been work to develop fault tolerance and “self-healing,” there have not been any noticeable large-scale successes.

    Developing millions or billions of capabilities will be a huge undertaking. Not all of the capabilities will be fully fleshed out by programmers, since some are likely going to involve neural networks such as those proposed by Jeff Hawkins in On Intelligence, or genetic algorithms, or some other learning model which may or may not be based on biology.

    Unfortunately, at this point it is unclear how many discrete capabilities are required to handle balancing, walking, catching a ball, crossing the street, and so on. My current guesstimate is that this would be in the billions, and therefore the equivalent of trillions of lines of code would be required. This is definitely a huge undertaking. However, how much of this code is hand written, how much is generated automatically, and how much is data-driven or biologically driven remains to be seen. As engineers and researchers try to figure out how to develop robots which can act (and maybe even be) intelligent, there is going to be a lot of design space investigated.

    Even if robots are developed via models of the brain, the set of capabilities that robot can perform should be a good measure. Of course, if the capabilities develop via learning, as opposed to being added incrementally by engineers, counting them will be a bit more challenging. So a good question based on this metric would be: how many capabilities does a human have today? What are the capabilities of an ant, a spider, etc.? Knowing the capabilities would also help in emulating them.

    In an upcoming article, I do plan to take a stab at examining the challenges that would be involved in trying to develop a robot in a community-driven manner. As you point out, the sheer manpower required means that no particular group would be able to develop everything. Development will occur over long periods of time, with new additions building up until the underlying architecture gets overwhelmed; then the architecture gets revamped, progress continues, and so on. Development will require substantial automation.
