The computing power that once fit inside an entire room now fits into your smartphone, all made possible by cramming more transistors onto a single integrated circuit over the past 50 years.
This exponential growth in computing power was predicted by Moore’s Law, an observation and projection by Gordon E. Moore that the number of transistors in a dense integrated circuit doubles approximately every two years thanks to improved design and manufacturing methods.
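The doubling rule is simple exponential growth: if a chip holds N₀ transistors in year t₀, it holds roughly N₀ · 2^((t − t₀)/2) transistors in year t. A minimal sketch in Python, using the Intel 4004’s often-cited 1971 baseline of about 2,300 transistors as an illustrative assumption:

```python
def transistor_count(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count per chip under Moore's Law.

    Assumes a doubling every `doubling_years` years from an illustrative
    baseline: the Intel 4004 (1971), roughly 2,300 transistors.
    """
    return base_count * 2 ** ((year - base_year) / doubling_years)

# Two years after the baseline, the count has doubled once.
print(transistor_count(1973))  # 4600.0

# Fifty years of doubling every two years projects into the tens of billions,
# the same order of magnitude as today's largest processors.
print(f"{transistor_count(2021):.2e}")
```

The striking part is not the formula but the exponent: fifty years is twenty-five doublings, a factor of over thirty million.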
But Moore’s Law is more than just a prediction; it has become the goal and self-fulfilling prophecy of the semiconductor industry, making the transistor fabricated on a microchip the most frequently manufactured human artifact in history.
According to David Brock, “Estimates of the number of transistors produced in a single year now match, or exceed, estimates of the total number of all the grains of sand on all the world’s beaches. With computing devices made of microchips, the price of computing has fallen over a million-fold, while the cost of electronics has fallen a billion-fold.”
Can innovation continue at this rate? Basic spatial limits and physical laws suggest no.
Given the finite velocity of light and the atomic nature of materials, a limit seems inevitable, and some predict the rate of progress will saturate. Even Moore himself foresees “Moore’s Law dying here in the next decade or so.”
Moore’s Law will have to evolve to survive, morphing into what some call “more than Moore” or Moore’s Law 3.0. Can we change how we define Moore’s Law and still call it one? Why not? It’s changed before.
Besides, the end of Moore’s Law has been predicted several times before, yet each time the industry hit a wall, the wall receded and innovation surmounted it.