The Shrink Shock

Twenty years ago the chip industry was facing an unprecedented setback – for the first time in its history a shrink was not delivering improvements in density, power, speed and cost. Density, speed and cost benefits were still achievable, but power was not.

“The industry is facing a challenge which is unprecedented in its history,” said Dr Tsugio Makimoto, corporate advisor at Sony, in 2004. “The problems are the increase in leakage current, the increase in wiring capacitance, a decrease in electron mobility, and parasitic capacitance in the diffusion layer. They mean that finer geometries don’t necessarily mean lower power.”

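To see why those effects bite, it helps to keep the standard first-order CMOS power relation in mind (this formula and its symbols are a textbook approximation, not something stated in the article):

    P_{\text{total}} \;\approx\; \alpha\, C\, V_{dd}^{2}\, f \;+\; V_{dd}\, I_{\text{leak}}

The first term is dynamic (switching) power, which does scale down with capacitance and supply voltage; the second is leakage power, and it was the steep growth of the leakage current with thinner gate oxides and lower threshold voltages that broke the old assumption that a smaller transistor is automatically a lower-power transistor.
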
The shrink in question was the transition from 130nm to 90nm.


“Moving from 0.13µm to 90nm delivers,” said Altera CEO John Daane, “a 50 per cent increase in performance, a 2.25 times increase in density, pricing which will be half the cost per logic element of 0.13µm, and power consumption which is higher because of the leakage current.”


That is not to say that 90nm was not delivering benefit. As Wim Roelandts, CEO of Xilinx, pointed out: “Going from 130nm to 90nm gives a 2.1 times increase in die per wafer. Going from eight inch to 12 inch gives us a 2.4 times to 2.6 times increase in the number of die per wafer at an increase in manufacturing cost of 1.8 times. If you combine 90nm and 12 inch you get five times the number of die per wafer for only a 1.8 times increase in cost per wafer. That’s why we’re pushing so hard for 90nm.”

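Roelandts’ arithmetic is easy to sanity-check. The back-of-the-envelope sketch below is mine, not Xilinx’s (variable names are invented, and it takes the low end of the quoted 2.4x–2.6x wafer gain); it simply multiplies the two die-per-wafer gains together and spreads the higher wafer cost across them:

    # Figures quoted by Wim Roelandts of Xilinx, circa 2004; variable names are illustrative.
    die_gain_90nm = 2.1       # more die per wafer from the 130nm -> 90nm shrink
    die_gain_300mm = 2.4      # low end of the 2.4x-2.6x gain from 200mm to 300mm wafers
    wafer_cost_rise = 1.8     # a 300mm/90nm wafer costs roughly 1.8x as much to make

    die_per_wafer_gain = die_gain_90nm * die_gain_300mm        # about 5x - the "five times" figure
    cost_per_die_ratio = wafer_cost_rise / die_per_wafer_gain  # about 0.36x the old cost per die

    print(f"die per wafer: {die_per_wafer_gain:.1f}x, cost per die: {cost_per_die_ratio:.2f}x of baseline")

So even though the 300mm, 90nm wafer costs nearly twice as much to produce, each die on it works out at roughly a third of its 130nm, 200mm cost – which is the economic pull Roelandts was describing.
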
While the process technology R&D people talked confidently of 65nm and 45nm, the word from the coal face was that, although you could shrink feature sizes, you might derive no performance benefit from doing so.

With process no longer the trusty, perennial provider of improved performance, many companies began looking to architectural and algorithmic innovation for progress.


Comments


  1. SecretEuroPatentAgentMan

    Well, did we really get many algorithmic improvements? Those are powerful but also very hard.

    • Not so much algorithmic, but lots of architectural improvements.

      One big improvement was in SRAM cells, which meant we didn’t have to add so much error correction circuitry, nor give them their own power supply.
      Also, on 300mm wafers we could do more metal layers, which improved clock distribution.
      Cell design also improved a lot – the beginning of handling variability, which at that time was a black art.

      The thing was we thought we had solved the problems at 90nm; then at 65nm things were even worse than expected, so the whole thing started again.
