People often compare computers with brains, but there are important differences between them. One, seldom considered by the wider public, is that computer processors are made of silicon while our brains are made of carbon. All known life is carbon-based; carbon is arguably the most versatile element in the universe. Why, then, should computers be made of silicon, even if it is the second most versatile? IBM now thinks that the silicon age of computing is about to end, and that we might see the first carbon computer in just six years.
It is a bold statement, tempered with caveats and made more in hope than with authority. Carbon nanotube chips would not be very different from silicon chips, only much faster. Making them at scale, however, still requires significant advances in technology. The year 2020 is around the time when silicon is supposed to hit a roadblock, yet again.
It would be great to have nanotube chips ready by then, and that is what IBM, and a large section of the industry, hopes for. According to the International Technology Roadmap for Semiconductors, an industry organisation formed by the world's leading semiconductor companies, silicon chips will reach a feature size of four nanometers by 2020, three generations from the current 22 nanometers. No one has an answer to the severe technical problems that crop up at that size.
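As a rough illustration (my sketch, not a calculation from the article), classical chip scaling assumes that transistor density rises with the inverse square of the feature size, so the roadmap's figures imply a large density gain between the 22-nanometer and four-nanometer nodes:

```python
# A minimal sketch of classical feature-size scaling.
# Assumption (not stated in the article): transistor density
# scales roughly as 1 / feature_size**2.

def relative_density(old_nm: float, new_nm: float) -> float:
    """Relative transistor density gain when shrinking
    from old_nm to new_nm, under the inverse-square assumption."""
    return (old_nm / new_nm) ** 2

# The article's figures: 22 nm today, 4 nm projected for 2020.
print(relative_density(22, 4))  # -> 30.25, i.e. ~30x more transistors per unit area
```

Under this idealised assumption the four-nanometer node would pack about thirty times as many transistors into the same area as today's 22-nanometer chips; in practice, leakage and heat (discussed below) keep real gains well short of the ideal.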
The industry wants a replacement technology ready by then, and carbon nanotubes seem to be its best bet at the moment. This is not a passing issue but a serious problem that could slow down the industry significantly, and global innovation with it, since much technological innovation rests on the continual increase of computing power. According to the Linley Group, a technology consultancy, costs per transistor are set to rise from now on, after falling steadily for decades.
The next generation of chips, at 14 nanometers, may be expensive and not widely used for a while. Technical problems and costs increase as chips shrink further: currents leak, chips get too hot, and they resist mass production. The industry needs new materials, and many have been tried. Nanotubes have performed very well, and they have the additional advantage of being technologically similar to silicon while being carbon, a smaller atom that may one day let us do many wonderful things.
IBM’s research gives us hope that nanotubes will work, as the company has already made nanotube transistors. But it has not yet managed to pack the nanotubes closely enough together, a problem that remains hard to solve because the technology to do so does not exist. What if it is not solved by 2020? One factor we often forget, in our obsession with Moore’s Law, is that software has been advancing too, at least as rapidly as hardware.
Advances in software can drive computing even if hardware grinds to a halt for some time. Let us look at how brains evolved. Between two million and 100,000 years ago, the human brain rapidly increased in size in two spurts. But it has decreased in size by 10% in the last 15,000 years. Does that mean that we are less intelligent than our ancestors? Probably not.
After reaching a certain size, the brain may have learned to work more efficiently. The human brain needs a lot of energy, so there is pressure to keep it at just the right size. Chips can follow the same logic in reverse: even as their size stands still, we can make them more efficient.