So what if Moore’s Law ends?

Look at your mobile device

Have you noticed that this year’s smartphones, the ones announced or released so far, have not been that much better than last year’s?

This post is a follow-up to my last post, about the end of Moore’s Law.

The Samsung Galaxy Note 5 was in many ways just an incremental step forward over the Note 4. It heralds the end of Moore’s Law: the improvement between phone generations has declined.
Image source: Samsung

The Samsung Note 5 pictured above is like all other smartphones in this respect. You’ll notice something this year as the new mobile devices are announced: they are not that much better than the previous year’s devices! In fact, you probably don’t see the need to upgrade as much as before.

Why is that? Why has the rate of improvement slowed or, in some cases, perhaps even halted altogether? What drove the improvement was Moore’s Law (really, it should be called Moore’s Observation): the idea that we could double the density of an integrated circuit every 24 months. The big challenge that I see is that without a growing transistor budget, engineers are left purely with architectural improvements. Those have historically scaled much more slowly than any transistor budget increase, particularly now that Dennard scaling has been gone since 2005.
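To make that gap concrete, here is a minimal back-of-the-envelope sketch in Python. The 24-month doubling is the figure quoted above; the 5% per year for architecture-only improvement is purely an illustrative assumption, not a measured number.

    # A minimal sketch of how a 24-month doubling compounds, versus slow,
    # architecture-only gains. The growth rates are illustrative assumptions.

    def moores_law_density(years, doubling_period_years=2.0):
        """Relative transistor density after `years` of 24-month doubling."""
        return 2 ** (years / doubling_period_years)

    def architecture_only(years, annual_gain=0.05):
        """Relative performance if better design alone yields ~5% per year."""
        return (1 + annual_gain) ** years

    for years in (2, 10, 20):
        print(f"{years:>2} years: density x{moores_law_density(years):,.0f}  "
              f"vs architecture-only x{architecture_only(years):.1f}")
    # Ten years of doubling gives ~32x the transistors; 5% per year gives ~1.6x.

That difference in compounding is the whole story: once the transistor budget stops growing, the slow curve is all that is left.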

Where else could the phone get better?

There are other areas to improve, of course; this article is by no means intended to be a comprehensive list of everything that could get better. The SoC in your phone, which contains the CPU, GPU, and a few other functions, is not the only thing that can improve.

Have you ever had bad signal reception on your phone? Of course you have. One area that could still improve with Moore’s Law is the radio. This is true of both the receiver and the transmitter, allowing for better coverage, faster data speeds, and perhaps longer battery life.

Your phone’s storage could also get better. With the introduction of 3D NAND and NVMe, it may be able to store more and access those files faster. That means you can keep a larger FLAC music library or video collection. You will especially need that extra room as screen resolutions go up and file sizes grow with them to take advantage of the higher resolution. Beyond that, you can store more of your essential files on your phone.

Samsung has made some pretty decent incremental improvements in its AMOLED screens. AMOLED has come a long way, and we do seem to be at the point now where the pixel density makes the “PenTile” type subpixel designs a non-issue. Dr. Raymond M. Soneira has covered this extensively, and he sees considerable improvements in the Note 5’s screen; it is 21% more power efficient than the Note 4’s, for example. There is room for improvement elsewhere too. AMOLED technology needs to keep making the blue subpixel last longer (it currently degrades faster than the red and green subpixels) and to keep improving performance under ambient light. Finally, there is also room for further improvement in the power consumption of the phone, which brings us to battery life.

Probably the biggest complaint that most people have is battery life. I speak as a user of extended batteries: I absolutely hate having to charge my phone all of the time. Yet battery technology is an example of something that has not scaled like semiconductors. Then again, most things in life do not; semiconductors have scaled faster than pretty much anything else in human history. Battery technology, alas, is no exception. In fact, it has not changed all that much since 2000, with lithium-ion batteries remaining the norm. There are a few potentially promising technologies on the horizon, but for now we are not going to see major improvements in battery life. Those technologies remain in the “maybe someday” category.

Marginal returns even if we could get it faster

SSDs, shown here at CES 2016, have rapidly proliferated over the past few years, offering considerable advantages for consumers. Yet despite the remaining potential for improvement, future gains will not be as big as the ones we have already seen.
Image source: Zotac

If you have ever installed a solid-state drive (SSD) in your computer, you will of course know the advantages: faster boot times, applications that load faster, and less susceptibility to fragmentation (and the slowdown over time that comes with it).

There have been several exciting announcements in the past year or so. First, there is the rapid proliferation of NVMe SSDs over the older AHCI protocol, which should improve sequential write speeds considerably; more NVMe SSDs are being released, and I expect that to continue through 2016. Second, the NAND that actually stores your data has been improved with 3D NAND, which can improve the endurance of SSDs, their performance, and the capacity they hold. Third, we may not even stick with NAND: technologies like 3D XPoint, a type of ReRAM, could eventually replace it by offering superior performance and durability.

Even so, the advantages are limited. Even if we could make a permanent, non-volatile storage medium with the performance and durability of, say, DRAM, the benefits would be marginal. The reason is that when we transitioned from hard drives (which operate in milliseconds) to NAND (which operates in microseconds), the gains were quite noticeable to the human eye. Even 15,000 rpm hard drives could not keep up, because SSDs have no physical tracks or sectors and therefore no physical seek limits, which led to considerable gains; when you upgrade to an SSD, it feels faster. By contrast, the gains from NAND to, say, DRAM-like performance are much smaller.
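A rough way to see why the second jump feels smaller is to compare access latencies against what a person can perceive. The sketch below uses ballpark, assumed order-of-magnitude numbers (a hard drive in milliseconds, NAND in the tens of microseconds, a DRAM-like medium around a hundred nanoseconds, and an invented figure of 100 small reads behind one user action), not benchmarks of any particular device.

    # Rough, assumed order-of-magnitude access latencies, in seconds.
    latencies = {
        "hard drive (ms-class seek)": 10e-3,
        "NAND SSD (us-class read)": 100e-6,
        "DRAM-like (ns-class read)": 100e-9,
    }

    # Human perception of "snappiness" is on the order of ~100 ms for a whole
    # interaction, so what matters is how much of that budget storage eats.
    perception_budget_ms = 100
    accesses_per_action = 100  # assumed number of small reads behind one click

    for name, latency in latencies.items():
        total_ms = latency * accesses_per_action * 1000
        print(f"{name:<28} {total_ms:8.3f} ms of a {perception_budget_ms} ms budget")
    # The HDD -> NAND step removes about a full second of waiting; the
    # NAND -> DRAM-like step removes only ~10 ms, well below what you can feel.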

Your eyes simply cannot notice the difference, and that has implications. For one, the price premium that any new storage technology can command may be more limited. ReRAM proponents claim it will become cheaper over time, but that remains to be seen at this point. I remain cautiously optimistic; time will tell.

Of course, elsewhere, such as in the enterprise, the massive sequential performance and superior endurance will matter a lot more, and these technologies will sell.

Electronics will become like cars and aircraft

Have you looked at this year’s new car models? Much like phone makers, of course, the automotive industry releases new models every year.

If cars had improved at the pace of semiconductors, they would be driving at several times the speed of sound, be so cheap that they would be disposable, and be hyper-efficient. That is what decades of exponential scaling would have done.
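Just to show how quickly that compounding adds up, here is a throwaway calculation. The starting speed and the time span are made-up round numbers chosen only to illustrate the shape of the curve, not real automotive figures.

    # Illustrative only: apply a 24-month doubling to a car's top speed.
    top_speed_kmh = 180.0        # assumed starting top speed, decades ago
    years_of_scaling = 40        # roughly "decades of exponential scaling"
    speed_of_sound_kmh = 1235.0  # approximate, at sea level

    scaled = top_speed_kmh * 2 ** (years_of_scaling / 2)
    print(f"Scaled top speed: {scaled:,.0f} km/h "
          f"(~{scaled / speed_of_sound_kmh:,.0f}x the speed of sound)")
    # Forty years of doubling is roughly a million-fold increase; even a tiny
    # slice of that compounding covers "several times the speed of sound".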

Cars, of course, are not like that at all, because the improvements are incremental; the rate of change is much slower. Sure, this year’s model may be slightly more fuel efficient, somewhat safer, and so on, but it is not a huge improvement. That is where all electronics are going. To be sure, there will continue to be improvements, but they will be more of the one-time variety. We will see gains in the single-digit percentages, rather than the rapid scaling of before.

A few examples:

  • Moving the memory controller from the motherboard onto the CPU die can give a massive improvement, but only once
  • High Bandwidth Memory (HBM), once it becomes cheap and mainstream, will give a one-time gain; after that, the rate of improvement will slow back down to DRAM-like speeds

There is also what happens at the fabrication stage: many of the remaining improvements there will mostly yield one-time gains as well.

The Concorde, the famous supersonic aircraft.
Image source: Musée de La Poste

Another analogy might be aircraft. Not only are improvements incremental rather than exponential, but I have wondered whether the giant, hot dies we currently run for CPUs and GPUs may be like the Concorde. Might we come to the conclusion that something slower is simply more power efficient, cheaper, and more practical?

Perhaps unlike supersonic aircraft, which have disappeared entirely, large, hot dies at the reticle limit may not disappear, but they will become more and more niche. Certainly, with their 300 W TDPs and sheer size, they will be relegated to roles where that extreme performance is truly needed.

Like aircraft, there will be bottlenecks. Materials science will bottleneck both computers and aircraft, for example, as will cooling, and progress will remain incremental.

What could change this?

The problem is that Moore’s Law is more or less dead. As I noted previously, we are left with only making better use of the transistors we have, which will scale much more slowly. Dennard scaling is long gone, and increasingly, the future of Moore’s Law itself is in doubt. I think that we are bumping up against the laws of physics. It is like an engine: one can make it approach 100% efficiency, but never exceed it. We are running up against the limits of atoms.

Unless we see major improvements in EUV or some other breakthrough (like graphene), we are not going to see the exponential gains that we had in the past. Something like graphene might lead to a jump and a few more years of scaling, or it could be just another one-time gain. It’s hard to say until we see it; that is the nature of blue-skies research, really.

The only way we will know is if we invest massive amounts of R&D money into it. The capital costs keep rising, and that has forced many companies out of the market.

Until something big changes though, we will be in the age of incremental electronics.

I should also note that I have only scratched the surface, and only from a consumer standpoint. From the standpoint of a software developer, an engineer, or someone who needs scientific computing, the end of Moore’s Law has huge implications that will affect all of us.
