Technology Articles


50 years of Moore’s Law

Gordon Moore pioneered the integrated circuit and cofounded the chip giant Intel; in retirement he has focused on science- and technology-oriented philanthropy. But thanks to an article he published in April 1965 in Electronics magazine, he’s known most widely for Moore’s Law, the prediction that has reflected—and helped drive—steady and staggeringly fast progress in computing technology. In preparation for the 50th anniversary of Moore’s prediction, IEEE Spectrum Associate Editor Rachel Courtland visited the man himself at his home on Hawaii’s Big Island.

Rachel Courtland: It’s been 50 years since the article came out.

Gordon Moore: It’s hard to believe. I never would have anticipated anyone remembering it this far down the road.

R.C.: Why is that?

G.M.: At the time I wrote the article, I thought I was just showing a local trend. The integrated circuit was changing the economy of the whole [electronics] industry, and this was not yet generally recognized. So I wrote the article to try to get the point across—this is the way the industry is going to get things really cheap.

R.C.: At that point, the integrated circuit was still fairly new.

G.M.: The integrated circuit had been around a few years. The first few had hit the market with as many as about 30 components on the chip—the transistors, resistors, and so forth. I looked back to the beginning of the technology I considered fundamental—the planar transistor—and noticed that the [number of components] had about doubled every year. And I just did a wild extrapolation saying it’s going to continue to double every year for the next 10 years.

And it proved to be amazingly correct. I had a colleague who saw that and dubbed this Moore’s Law. It’s been applied to far more than just semiconductors. Sort of anything that changes exponentially these days is called Moore’s Law. I’m happy to take credit for all of it.
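The arithmetic behind that extrapolation is easy to make concrete: doubling every year for ten years multiplies the component count by 2¹⁰, roughly a thousandfold. Here is a minimal sketch of the calculation (the starting count of 30 comes from Moore’s remark above; the exact historical figures varied):

```python
# Back-of-the-envelope version of the 1965 extrapolation:
# component counts per chip double every year for a decade.
start_year, start_count = 1965, 30   # "about 30 components," per the interview
for years_out in range(11):
    print(start_year + years_out, start_count * 2 ** years_out)
# Ten doublings multiply the count by 2**10 = 1024, so ~30 components
# in 1965 grows to roughly 30,000 by 1975 under the prediction.
```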

R.C.: You spoke to a colleague of mine after winning the 2008 IEEE Medal of Honor, and I believe you told her that you didn’t want Moore’s Law to be your legacy. You’d moved on to other things.

G.M.: Well, I couldn’t even utter the term “Moore’s Law” for a long time. It just didn’t seem appropriate. But as it became something that almost drove the semiconductor industry rather than just recording its progress, I became more relaxed about the term.

R.C.: How long did it take to come to terms with having a law named after you?

G.M.: Oh, 20 years or so. It really took a long time. But [now] it is well established. A while back I googled “Moore’s Law” and “Murphy’s Law” and discovered that Moore’s Law had more references than Murphy.

R.C.: Did that feel like a coup of some sort?

G.M.: I think so. It’s about as profound a law [as Murphy’s Law] too.

R.C.: Coming from a science background, when I think of laws, I think of ironclad, mathematically grounded laws of nature. And Moore’s Law is…

G.M.: It’s not a law in any real respect. It was an observation and a projection.

R.C.: Technological improvements are nothing new, but the rapid progress that’s been made under Moore’s Law has been pretty special. Is there something fundamentally different about the nature of silicon?
Photo: Intel

Moore’s Big Move: Moore [above] wrote his seminal 1965 paper while working at Fairchild Semiconductor. Just three years later, he and colleague Robert Noyce left to cofound Intel.

G.M.: The semiconductor technology has some unique characteristics that I don’t see duplicated many other places. By making things smaller, everything gets better. The performance of devices improves; the amount of power dissipated decreases; the reliability increases as we put more stuff on a single chip. It’s a marvelous deal.

I used to give talks about how other industries might have progressed. You know, had the auto industry made progress at the same rate [as silicon microelectronics], you would have gotten a million miles per gallon of fuel, had cars that could go several hundred thousand miles an hour. It’d be more expensive to park [one] downtown for the night than to buy a new Rolls-Royce. And one of the members of the audience pointed out, yeah, but it’d only be 2 inches long and a half-inch high; it wouldn’t be much good for your commute.

R.C.: You’ve predicted the end of Moore’s Law several times in the past. How long do you think it will continue?

G.M.: Well, I have never quite predicted the end of it. I’ve said I could never see more than the next couple of [chip] generations, and after that it looked like [we’d] hit some kind of wall. But those walls keep receding. I’m amazed at how creative the engineers have been in finding ways around what we thought were going to be pretty hard stops. Now we’re getting to the point where it’s more and more difficult, and some of the laws are quite fundamental. I remember we had Stephen Hawking, the famous cosmologist, in Silicon Valley one time. He gave a talk, and afterward he was asked what he saw as the limits to the integrated circuit technology.

Now this is not his field, but he came up with two things: the finite velocity of light and the atomic nature of materials. And I think he’s right. We’re very close to the atomic limitation now. We take advantage of all the speed we can get, but the velocity of light limits performance. These are fundamentals I don’t see how we [will] ever get around. And in the next couple of generations, we’re right up against them.

R.C.: What happens then, once you’ve reached those limits?

G.M.: Well, things change when we get to that point. No longer can we depend on making things smaller and higher density. But we’ll be able to make several billion transistors on an integrated circuit at that time. And the room this allows for creativity is phenomenal. Now there are other technologies that are proposed to extend beyond what we can do with silicon. Some of the things coming out of nanotechnology may have a role to play, and materials like graphene, a single layer of carbon hexagons, are very interesting. I’m not close enough to predict that any of them is going to be successful, but they have a tough competitor. [Multiple] billion transistors on a silicon chip is hard to beat.

R.C.: So do you think the kind of progress we expect from chips will change?

G.M.: Some things will change. We won’t have the rate of progress that we’ve had over the last few decades. I think that’s inevitable with any technology; it eventually saturates out. I guess I see Moore’s Law dying here in the next decade or so, but that’s not surprising.

R.C.: Do you think the way we consume electronics will change as Moore’s Law comes to a close?

G.M.: I don’t think it’s likely to change much. As long as the new products offer incremental capability, I think they will replace the older ones pretty rapidly. When we run out of ideas of what to add, then people may decide they don’t need a new one every year, and hang on to the same piece of equipment for three, four, five years. That’ll slow down the industry quite a bit. But I think it’s inevitable that something like that will occur.

R.C.: There’re the fundamental physical limits—the atomic scale, the speed of light—and then there’s also the cost associated with fabricating smaller and smaller transistors. Which do you think we’ll hit first? Is it going to be the cost or the fundamental physical limits? I guess they’re tied together.

G.M.: They really are, yeah. Making things smaller is increasingly expensive. Fabs to operate on the newest technology nodes are absurdly expensive. It’s hard to believe that Intel started with [US] $3 million total capital. Now you can’t buy one tool, you can’t even install one tool for that much, I don’t think. The machines have gotten a lot more expensive and complex. On the other hand, their productivity in terms of transistors out per unit time has increased dramatically. So we can still afford to build a few fabs to utilize the modern technology.

We’ve had a lot of companies decide it was too expensive to move to the next generation already. There are only a few of us in the world that are investing in state-of-the-art fab facilities today. And I don’t see that number changing much over the next generation or two.

R.C.: Your initial prediction was largely based on the idea that the cost of each component on a chip was decreasing. So is that going to be the thing that decides it in the end? It’s an economic law, so it’ll have an economic demise?

G.M.: I think it’s going to be a technological demise rather than an economic one. People will continue to squeeze cost out of the products for quite a while after they can’t make them any smaller. I’m sure that’s happening already.

R.C.: I told a few people that I was going to meet you today, and I asked them what questions I should ask. Some just sort of laughed and said, “Can you ask him how we get out of this mess?” Because they’re all struggling with these technological issues.

G.M.: Whoo. Well, you could always retire and move to Hawaii.

R.C.: I think they’re trying to get to that point.

G.M.: Yeah, well, it’s the nature of the business. There aren’t many easy businesses, and this certainly isn’t one of them.

This interview has been edited for length and clarity. It originally appeared in print as “The Law That’s Not a Law.”

Dismantling Fukushima: The World’s Toughest Demolition Project

Taking apart the shattered power station and its three melted nuclear cores will require advanced robotics

A radiation-proof superhero could make sense of Japan’s Fukushima Daiichi nuclear power plant in an afternoon. Our champion would pick through the rubble to reactor 1, slosh through the pooled water inside the building, lift the massive steel dome of the protective containment vessel, and peek into the pressure vessel that holds the nuclear fuel. A dive to the bottom would reveal the debris of the meltdown: a hardened blob of metals with fat strands of radioactive goop dripping through holes in the pressure vessel to the floor of the containment vessel below. Then, with a clear understanding of the situation, the superhero could figure out how to clean up this mess.

Unfortunately, mere mortals can’t get anywhere near that pressure vessel, and Japan’s top nuclear experts thus have only the vaguest idea of where the melted fuel ended up in reactor 1. The operation floor at the top level of the building is too radioactive for human occupancy: In some areas the dose rate is 54 millisieverts per hour, enough to give a cleanup worker a full year’s allowable dose in a single hour. Yet, somehow, workers must take apart not just the radioactive wreck of reactor 1 but also the five other reactors at the ruined plant.

This decommissioning project is one of the biggest engineering challenges of our time: It will likely take 40 years to complete and cost US $15 billion. The operation will involve squadrons of advanced robots, the likes of which we have never seen.

Nothing has been the same in Japan since 11 March 2011, when one of history’s worst tsunamis flooded Fukushima Daiichi, crippled its emergency power systems, and triggered a series of explosions and meltdowns that damaged four reactors. A plume of radioactive material drifted over northeast Japan and settled on towns, forests, and fields, while plant workers scrambled to pour water over the nuclear cores to prevent further radioactive releases. Nine months later, the Tokyo Electric Power Co. (TEPCO), the utility company that operates the plant, declared the situation stable.

Stability is a relative concept: Although conditions at Fukushima Daiichi aren’t getting worse, the plant is an ongoing disaster scene. The damaged reactor cores continue to glow with infernal heat, so plant employees must keep spraying them with water to cool them and prevent another meltdown. But the pressure vessels and containment vessels are riddled with holes, and those leaks allow radioactive water to stream into basements. TEPCO is struggling to capture that water and to contain it by erecting endless storage tanks. The reactors are kept in check only by ceaseless vigilance.

TEPCO’s job isn’t just to deal with the immediate threat. To placate the furious Japanese public, the company must clean up the site and try to remove every trace of the facility from the landscape. The ruin is a constant reminder of technological and managerial failure on the grand scale, and it requires a proportionally grand gesture of repentance. TEPCO officials have admitted frankly that they don’t yet know how to accomplish the tasks on their 40-year road map, a detailed plan for decommissioning the plant’s six reactors. But they know one thing: Much of the work will be done by an army of advanced robots, which Japan’s biggest technology companies are now rushing to invent and build.

The Site: During the 2011 accident, reactors 1, 2, and 3 suffered partial meltdowns. Explosions shattered reactor buildings 1, 3, and 4. Reactors 5 and 6 are undamaged.

Here’s some more bad news: Chernobyl and Three Mile Island, the only other commercial-scale nuclear accidents, can’t teach Japan much about how to clean up Fukushima Daiichi. The Chernobyl reactor wasn’t dismantled; it was entombed in concrete. The Three Mile Island reactor was defueled, but Lake Barrett, who served as site director during that decommissioning process, says the magnitude of the challenge was different. At Three Mile Island the buildings were intact, and the one melted nuclear core remained inside its pressure vessel. “At Fukushima you have wrecked infrastructure, three melted cores, and you have some core on the floor, ex-vessel,” Barrett says. Nothing like Fukushima, he declares, has ever happened before.

Barrett, who is now a consultant for the Fukushima cleanup, says TEPCO is taking the only approach that makes sense: “You work from the outside in,” he says, dealing with all the peripheral problems in the buildings before tackling the heart of the matter, the melted nuclear cores. During the first three years of the cleanup, TEPCO has been surveying the site to create maps of radiation levels. The next step is removing radioactive debris and scrubbing radioactive materials off walls and floors. Spent fuel must be removed from the pools in the reactor buildings; leaks must be plugged. Only then will workers be able to flood the containment structures so that the melted globs of nuclear fuel can safely be broken up, transferred to casks, and carted away.

Many of the technologies necessary for the decommissioning already exist in some form, but they must be adapted to fit the unique circumstances of Fukushima Daiichi. “It’s like in the 1960s, when we wanted to put a man on the moon,” says Barrett. “We had rocketry, we had physics, but we had never put all the technologies together.” Just as with the moon shot, there is no guarantee that this epic project can be accomplished. But faced with the wrath of the Japanese people, TEPCO has no choice but to try.

To begin the first step—inspection—TEPCO sent in robots to map the invisible hot spots throughout the smashed reactor buildings. The first to arrive were the U.S.-made PackBot and Warrior, hastily shipped over from iRobot Corp. of Bedford, Mass. But Japan is justly proud of its own robotics industry, so the question arose, Why didn’t TEPCO have robots ready to respond in a nuclear emergency? Yoshihiko Nakamura, a University of Tokyo robotics professor, has the dispiriting answer. The government did fund a program on robotics for nuclear facilities in 2000, following a deadly accident at a uranium reprocessing facility. But that project was shut down after a year. “[The government] said this technology is immature, and it is not applicable for the nuclear systems, and the nuclear systems are already 100 percent safe,” Nakamura explains. “They didn’t want to admit that the technology should be prepared in case of accident.”

Still, some roboticists in Japan carried on their own research despite the government’s indifference. In the lab of Tomoaki Yoshida, a roboticist at the Chiba Institute of Technology, near Tokyo, robots have learned to crawl over rubble and to climb up and down steps. These small tanks roll on a flexible series of treads, which can be lifted or lowered individually to allow the bot to manage stairs.

After the Fukushima accident, Yoshida’s academic research became very relevant. With seed money from the government, he constructed two narrow metal staircases proportioned like the 5-floor staircases inside the Fukushima Daiichi reactor buildings. This allowed Yoshida to determine whether his bots could navigate those cramped stairs and tight turns. His acrobatic Quince robots proved themselves able, and after hundreds of tests they received TEPCO’s clearance for field operations. In the summer of 2011, the Quince bots became the first Japanese robots to survey the reactor buildings.

The Quinces were equipped with cameras and dosimeters to identify radioactive hot spots. But the robots struggled with a communication issue: The nuclear plant’s massive steel and concrete structures interfere with wireless communication, so the Quinces had to unspool cables behind them to receive commands and transmit data to their operators. The drawback of that approach soon became apparent. One Quince’s cable got tangled and damaged on the third floor of reactor 2, and the lonely bot is still sitting there to this day, waiting for commands that can’t reach it.

Back at Yoshida’s lab, where modest bunk beds bespeak the dedication of his students, the team is currently working on a new and improved survey bot named Sakura. To guard against future tangles, Sakura not only unspools cable behind it, it also automatically takes up the slack when it changes direction. It’s waterproof enough to roll through puddles, and it can carry a heavy camera capable of detecting gamma radiation. The bot can tolerate that radiation: Yoshida’s team tested its electronics (the CPU, microcontrollers, and sensors) and found that they’re radiation-tolerant enough to perform about 100 missions before any component is likely to fail. However, the robot itself becomes too radioactive for workers to handle. Sakura must therefore take care of itself: It recharges its batteries by rolling up to a socket and plugging itself in.

The second step in the Fukushima decommissioning is decontamination, because only when that is complete will workers be able to get inside to tackle more complex tasks. The explosions that shattered several of the reactor structures sprayed radioactive materials throughout the buildings, and the best protective suits for workers in hot zones are of little use against the resulting gamma radiation—a worker would have to be covered from head to toe in lead as thick as the width of a hand.

After the accident, the Japanese government called for robots that could work on decontamination, and several of Japan’s leading companies rose to the challenge. Toshiba and Hitachi have designed robots that use jets of high-pressure water and dry ice to abrade the surfaces of walls and floors; the robots will scour away radioactive materials along with top layers of paint or concrete and vacuum up the resulting sludge. But the robots’ range is defined by their own communication cables, and they can carry only limited amounts of their cleaning agents. Another bot, the Raccoon, has already begun nosing across the floor in reactor building 2, trailing long hoses behind it to supply water and suction.

To clear a path for the robotic janitors, another class of robots has been invented to pick up debris and cut through obstacles. The ASTACO-SoRa, from Hitachi, has two arms that can reach 2.5 meters and lift 150 kilograms each. The tools on the ends of the arms—grippers, cutting blades, and a drill—can be exchanged to suit the task. However, Hitachi’s versatile bot is limited to work on the first floor, as it can’t climb stairs.

Out Of The Pool: Spent fuel pools inside the damaged reactor buildings contain hundreds of nuclear fuel assemblies. TEPCO is emptying reactor 4’s pool [top] first. In the extraction process, a cask is lowered into the pool and filled with radioactive fuel assemblies. Then the cask is transported to a safer location, lowered into another pool [middle], and unloaded. The job is made more complicated because some of the assemblies are covered with debris [bottom] from the accident’s explosions.

Removing spent fuel rods is the third step. Each reactor building holds hundreds of spent fuel assemblies in a pool on its top floor. These unshielded pools, perfectly safe when filled with water, became a focus of public fear during the Fukushima Daiichi accident. After reactor building 4 exploded on 15 March, many experts worried that the blast had damaged the structural integrity of that building’s pool and allowed the water to drain out. The pool was soon determined to be full of water, but not before the chairman of the U.S. Nuclear Regulatory Commission had caused an international panic by declaring it dry and dangerous. The reactor 4 pool became one of TEPCO’s urgent decommissioning priorities, not only because it’s a real vulnerability but also because it’s a potent reminder of the accident’s terrifying first days.

The process of emptying that pool began in November 2013. TEPCO workers use a newly installed cranelike machine to lower a cask into the pool, then long mechanical arms pack the submerged container with fuel assemblies. The transport cask, fortified with shielding to block the nuclear fuel’s radiation, is lowered to a truck and brought to a common pool in a more intact building. The building 4 pool contains 1,533 fuel assemblies, and moving them all to safety is expected to take a year. The same procedure must be performed at the highly radioactive reactors 1, 2, and 3 and the undamaged (and less challenging) reactors 5 and 6.


Containing the radioactive water that flows freely through the site is the fourth step. Every day, about 400 metric tons of groundwater streams into the basements of Fukushima Daiichi’s broken buildings, where it mixes with radioactive cooling water from the leaky reactor vessels. TEPCO treats that water to remove most of its radioactive elements, but it can’t be rendered entirely pure—and as a result local fishermen have protested plans to release it into the sea. To store the accumulating water, TEPCO has installed more than 1000 massive tanks, which themselves must be monitored vigilantly for leaks.

TEPCO hopes to stop the flow of groundwater with a series of pumps and underground walls, including an “ice wall” made of frozen soil. Still, at some point the Japanese public must grapple with a difficult question: Can the stored water ever be released into the sea? Barrett, the former site director of Three Mile Island, has argued publicly that the processed water is safe, as contamination is limited to trace amounts of tritium, a radioactive isotope of hydrogen. Tritium is less dangerous than other radioactive materials because it passes quickly through the body; after it’s diluted in the Pacific, Barrett says, it would pose a negligible threat. “But releasing that water is an emotional issue, and it would be a public relations disaster,” he says. The alternative is to follow the Three Mile Island example and gradually dispose of the water through evaporation, a process that would take many years.

TEPCO must also plug the holes in the reactor vessels that allow radioactive cooling water to flow out. Many of the leaks are thought to be in the suppression chambers, doughnut-shaped structures that ring the containment vessel and typically hold water, which is used to regulate temperature and pressure inside the pressure vessel during normal operations. Shunichi Suzuki, TEPCO’s general manager of R&D for the Fukushima Daiichi decommissioning, explains that one of his priorities is developing technologies to find the leak points in the suppression chambers.

“There are some ideas for a submersible robot,” Suzuki says, “but it will be very difficult for them to find the location of the leaks.” He notes that both the suppression chambers and the rooms that surround them are now filled with water, so there’s no easy way to spot the ruptures; it’s not like finding the hole in a leaky pipe that’s spraying water into the air. Among the robot designs submitted by Hitachi, Mitsubishi, and Toshiba is one bot that would crawl through the turbid water and use an ultrasonic sensor to find the breaches in the suppression chambers’ walls.

If robots prove impractical, TEPCO may take a more heavy-handed approach and start pouring concrete into the suppression chamber or the pipes that lead to it. “If it’s possible to make a seal between the containment vessel and the suppression chamber, then the leaks don’t matter,” Suzuki says. One way or another, TEPCO hopes to have all the leaks stopped up within three years. Sealing the leaks is a necessary precondition for the final and most daunting task.

Photos, top: TEPCO; bottom: The Yomiuri Shimbun/AP Photo
Water, Water Everywhere: Groundwater flowing through the site mixes with radioactive cooling water leaking from reactor buildings and must therefore be stored and treated. To contain the accumulating water, TEPCO is filling fields with storage tanks [bottom]. These tanks must be monitored for leaks [top]. In August 2013, TEPCO admitted that 300 metric tons of contaminated water had leaked from one tank.

Removing the three damaged nuclear cores is the last big step in the decommissioning. As long as that melted fuel glows inside reactors 1, 2, and 3, Fukushima Daiichi will remain Japan’s ongoing nightmare. Only once the fuel is safely packed up and carted away can the memory begin to fade. But it will be no easy task: TEPCO estimates that removing the three melted cores will take 20 years or more.

First, workers will flood the containment vessels to the top so that the water will shield the radioactive fuel. Then submersible robots will map the slumped fuel assemblies within the pressure vessels; these bots may be created by adapting those used by the petroleum industry to inspect deep-sea oil wells. Next, enormously long drills will go into action. They must be capable of reaching 25 meters down to the bottoms of the pressure vessels and breaking up the metal pooled there. Other machines will lift the debris into radiation-shielded transport casks to be taken away.

Making the task more complicated is the design of the reactors. They have control rods that project through the bottom of the pressure vessels, and the entry point for each of those control rods is a weak spot. Experts believe that most of the fuel in reactor 1, and some in reactors 2 and 3, leaked down through those shafts to pool on the floor of the containment vessel below. To reach that fuel, some 35 meters down, TEPCO workers will have to drill through the steel of the pressure vessel and work around a forest of wires and pipes.

Before TEPCO can even develop the proper fuel-handling tools, Suzuki says, the company must get a better understanding of the properties of the corium—the technical term for the mess of metals left behind after a meltdown. The company can’t just copy the drills that broke up the melted core of the Three Mile Island reactor, says Suzuki. “At Three Mile Island, [the core] remained in the pressure vessel,” he says. “In our case, it goes through the pressure vessel, so it melted stainless steel. So our fuel debris must be harder.” The melted fuel may also have a lavalike consistency, with a hard crust on top but softer materials inside. TEPCO is now working with computer models and is planning to make an actual batch of corium in a laboratory to study its properties.

When the core material is broken up and contained, it will be whisked away to some to-be-determined storage facility. Over the decades its radioactivity will gradually fade, along with the Japanese public’s memory of the accident. It’s a shame that those twisted blobs of corium are too dangerous to be displayed in a museum, where a placard could explain that we human beings are so clever, we’re capable of building machines we can’t control.

Depending on whom you ask, nuclear power stations like Fukushima Daiichi are exemplars of either humanity’s ingenuity or hubris. But, the museum placard might add, these metallic blobs, plucked from the heart of an industrial horror, prove something else—that we humans also have the grit and perseverance to clean up our mistakes.

Source: IEEE Spectrum March 2014


New Probes Replace Surgeons’ Sense of Touch

Laparoscopic pressure sensors make 3-D maps of tumors

Surgeons’ best tools for locating tumors inside the body are often their hands. But during minimally invasive surgeries—which can reduce recovery time by days—the ability to examine tissue through touch, called palpation, is lost. Instead, surgeons must manipulate the tissue with long, narrow instruments and rely on visual images from tiny cameras. But engineers in the United States, the United Kingdom, and elsewhere have designed new tools to help restore a surgeon’s sense of touch.

The devices, dubbed palpation probes, are designed to be used laparoscopically and can detect changes in the stiffness of tissue. Tumors are harder than normal tissue, so they can be detected with a combination of pressure sensors and spatial positioning measurements. The readings are used to create a three-dimensional stiffness map that shows surgeons the margins of tumors.

A team at Nashville’s Vanderbilt University led by biomechatronics engineer Pietro Valdastri showed IEEE Spectrum a wireless probe that a surgeon can manipulate in the body with a laparoscopic tool. The small, cylindrical prototype was banged up and wrapped in tape, looking more like something you might find on the floor of your garage than in a surgical suite. But it’s what’s inside that counts—a pressure sensor, a three-axis accelerometer, a three-axis magnetic field sensor, a battery, and a wireless microcontroller.

It works like this: The capsule’s pressure-sensing tip is used to gently indent the tissue. The magnetic field sensor and accelerometer track the depth of the indentation, along with its position relative to a stationary magnet nearby. Each point of contact transmits information about the stiffness of the tissue at that point. Using an algorithm to fill in any unexplored area, the computer creates a 3-D color-coded map that displays the tumors. Valdastri’s team has been testing their probe on a pig’s liver and on a chunk of synthetic tissue that contains tumorlike lumps.
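The mapping step can be sketched in a few lines: every contact yields a position, a reaction force, and an indentation depth; stiffness is force divided by depth; and the sparse samples are interpolated onto a regular grid. The following Python sketch only illustrates that idea; the data, the mock tumor position, and the linear tissue model are invented for the example and are not Valdastri’s code:

```python
import numpy as np
from scipy.interpolate import griddata

# Illustrative probe samples: (x, y) contact positions in mm,
# indentation depths in mm, and reaction forces in newtons.
rng = np.random.default_rng(0)
xy = rng.uniform(0, 50, size=(200, 2))        # sampled contact points
depth = rng.uniform(1.0, 3.0, size=200)       # indentation depths
# Pretend a stiff "tumor" sits at (30, 20): stiffness rises near it.
k = 0.5 + 2.0 * np.exp(-((xy[:, 0] - 30)**2 + (xy[:, 1] - 20)**2) / 50)
force = k * depth                              # F = k * d for a linear tissue model
stiffness = force / depth                      # recover local stiffness k (N/mm)

# Interpolate the sparse samples onto a regular grid to form the map.
gx, gy = np.mgrid[0:50:100j, 0:50:100j]
stiffness_map = griddata(xy, stiffness, (gx, gy), method="linear")
print(np.nanmax(stiffness_map))                # the peak marks the mock tumor
```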

In the pig liver test, Valdastri’s probe was off by just 8 percent in its stiffness measurement. “This new sensor capsule is quite successful in measuring tissue properties,” says Robert Howe, a professor of engineering at Harvard University who developed some of the first remote palpation technologies in the early 1990s.

Image: Storm Lab
Tumors Are Tougher: In a stiffness map of a silicone sample, mock tumors stick out. Doctors could generate and use such maps during minimally invasive surgery.

The novelty of Valdastri’s probe, compared with previous designs, is in its use of a magnetic field sensor to track its position, according to Russell Taylor, director of the Center for Computer-Integrated Surgical Systems and Technology at Johns Hopkins University, in Baltimore. “It’s a very clever way to do it, but it’s certainly not the only way to do it,” he says.

Another group of researchers, this one based at King’s College London and led by robotics expert Kaspar Althoefer, has devised an alternative. Like Valdastri’s prototype, Althoefer’s system tracks the probe’s spatial position, how deeply it indents the tissue, and the reaction force of the tissue. But his design is based on optical fiber technology, and the probe is able to roll over the tissue surface with minimal friction.

Althoefer’s probe consists of three surface-profile sensors equally spaced around a spherical indenter. As the probe glides over a tissue surface, the sphere, which floats on a pocket of air, indents the tissue, and a pair of optical fibers measures the indentation. Another set of optical fibers measures the displacement of the three profile sensors, which move up and down with the tissue surface. The three surface sensors and the spherical indenter work jointly to determine the indentation depth and the force with which the tissue pushes back, making a map of the tissue’s stiffness. The probe has not yet been tested in an animal.
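The joint measurement can be pictured with a little geometry: the three profile sensors estimate the level of the undisturbed tissue surface, the indenter’s position relative to that level gives the indentation depth, and stiffness follows as force over depth. A toy calculation, with all numbers invented for illustration:

```python
def stiffness_estimate(surface_heights, indenter_height, reaction_force):
    """Estimate local tissue stiffness from one probe reading.

    surface_heights: heights (mm) of the three profile sensors,
                     which ride on the undisturbed tissue surface.
    indenter_height: height (mm) of the central spherical indenter.
    reaction_force:  force (N) with which the tissue pushes back.
    """
    surface = sum(surface_heights) / len(surface_heights)  # local surface level
    depth = surface - indenter_height                      # how far the sphere sank
    return reaction_force / depth                          # stiffness in N/mm

# Example reading: surface at ~10.0 mm, sphere pressed 2 mm deeper, 0.4 N back.
print(stiffness_estimate([10.0, 10.1, 9.9], 8.0, 0.4))     # ~0.2 N/mm
```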

Valdastri’s and Althoefer’s work builds upon nearly two decades of remote palpation research. Previous approaches have focused largely on adding touch sensors to conventional surgical instruments. Many of the technologies have been successfully demonstrated, but none have been commercialized.

The challenge is that the costs of the technologies simply outweigh the benefits to surgeons, say researchers. “If you ask surgeons, they’ll tell you that they need touch feedback, but then they go and perform a vast range of minimally invasive procedures without it,” says Howe. “Touch feedback is in the nice-to-have, not the got-to-have category,” he says.

But Valdastri’s and Althoefer’s probes have some characteristics that might be appealing to surgeons. Because Valdastri’s device is wireless, it is less likely to get in the way of other surgical tools, it doesn’t require a separate incision, and it may allow surgeons to reach places they can’t with a rigid or wired instrument. And because Althoefer’s probe glides over the tissue surface with almost no friction, it is less likely to damage the tissue.

But there are also practical hurdles to address in the quest for perfect palpation. Rajni Patel, a robotics researcher at the University of Western Ontario, in Canada, has developed a pressure-sensing array to palpate the delicate tissues of the lung. But one major challenge, he says, has been designing an array that can withstand the surgical sterilization process.

Researchers integrating palpation technologies into surgical robots are also still sorting out how to display the many layers of information to the robot’s operator. That has been a challenge for biomedical engineer Tim Salcudean at the University of British Columbia, in Vancouver. Using a technique dubbed vibro-elastography, he is generating 3-D elasticity maps by exciting tissue with vibration and measuring the waves produced using an ultrasound elastography probe. The probe is held by a surgical robot, which tracks its position. Keeping track of the elasticity map while performing surgery is a lot of work for the surgeon, he says: “How do we display? Is it an overlay or an adjacent image? How do we simplify controls? It is all very challenging, but seeing through the patient is the future of surgery.”

It might not be long before feeling around inside the body during surgery is considered old-fashioned. “The new generation of surgeons are not doing a lot of open surgery, so they’re not used to palpation,” says Salcudean. For those surgeons, computer-generated tissue maps might be all they need to “feel” a tumor.

Source: IEEE Spectrum February 2014


Spin Trick Could Make OLED Displays Cheaper

Molecules manipulate electron spin to increase light emission without the aid of iridium

A new way of coaxing light out of an organic LED may make for cheaper displays and could even provide a way to see magnetic fields.

By choosing a molecule of a particular shape, a team of German and American researchers designed a new type of OLED that has the potential to emit as much light as a commercial OLED, but without the rare metals normally added to make the devices efficient. If manufacturers could leave out metals such as iridium or platinum, they might not have to worry about potential shortages of these elements. This would allow them to bring down the costs of OLEDs, which are increasingly being used in the screens of smartphones and televisions, as well as in solid-state lighting.

OLEDs convert only a fraction of the electrical current they receive into light. That’s because of a quantum mechanical property of the electrons and their positive quasiparticle counterpart holes, called spin. Spin is what gives rise to magnetism; when the spins of a majority of electrons in a material point in the same direction, the material is magnetic. Each electron and each hole can exist in one of two spin states, so when they pair up there are four possible states, three of which dissipate their energy as heat rather than light. This means that only a quarter of the electricity put into the device produces light. OLED makers add metal to the hydrocarbon molecules in their devices to mix up the spin states and increase the number of light-generating combinations.
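The one-quarter figure follows from standard spin statistics; a worked version of the counting:

$$
2_{\text{electron spins}} \times 2_{\text{hole spins}} = 4 \text{ pair states}
= \underbrace{1}_{\text{singlet}\,\rightarrow\,\text{light}} + \underbrace{3}_{\text{triplets}\,\rightarrow\,\text{heat}},
\qquad \eta_{\text{internal}}^{\max} = \frac{1}{4} = 25\%.
$$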

But it’s also possible for some of the spins to switch to the more light-friendly version on their own, says John Lupton, a physicist at the University of Regensburg, in Germany, who was part of the team that did the work. “You just have to wait long enough for this spin state to spontaneously relax,” he says.

If an electron-hole pair can be held in its electrically excited state long enough for one spin to flip, when it drops to a lower energy state it’ll emit the extra energy as a photon rather than as heat, Lupton explains. “Long enough” is measured in milliseconds, compared with the nanoseconds it usually takes for emission to occur, a difference of six orders of magnitude.

The trick, which the researchers from Regensburg, the University of Bonn, the University of Utah, and MIT explain in a paper published online by Angewandte Chemie, lies in the shape of the organic molecules used in the LED. The researchers developed two kinds of polycyclic aromatic hydrocarbons (carbon-based molecules having multiple ring-shaped components): one called phenazine and the other triphenylene. The atomic structures of the molecules were such that they trapped the charge carriers long enough for the spontaneous change to occur. A peculiarity of the molecule—the researchers speculate that it may have to do with different energy characteristics in different areas of the molecule—prevents the extra energy from being released as heat before the spin can flip. “Because of their shape, you can localize the electron density and therefore the spin density in specific areas in the molecules,” Lupton says, adding that the result was surprising to the researchers, who were actually studying other facets of OLEDs. “If you look at textbooks of physical chemistry, this should not have actually worked,” he says.

Last year another group, led by Chihaya Adachi at Kyushu University, in Japan, demonstrated its own method for making OLEDs without the use of metal. The researchers designed a molecule that relies on fluorescence for light emission, instead of the phosphorescence used in most metal-reliant OLEDs. Their mechanism, called thermally activated delayed fluorescence, takes the heat released by some of the charge carriers and uses it to bump up the energy level of others to the point that they emit light instead of heat when they relax. The Kyushu scientists say their material has an internal efficiency of nearly 100 percent, which translates into about 14 percent of the electricity pumped into the OLED reemerging as light, compared with about 20 percent in commercial OLEDs that use metal.

Alán Aspuru-Guzik, a chemist at Harvard, sees the work by Adachi and Lupton as developments “that could lead to a breakthrough in the field of OLEDs.” The work both groups are doing could lead to better emission in the blue end of the spectrum in particular. Blue organic light emitters have been the most difficult to develop.

At the moment, the price of rare metals needed in OLED manufacturing is so low that manufacturers are unlikely to need either trick soon. And this new method doesn’t actually improve OLED efficiency over what existing devices can deliver, says Lupton. But it could prove appealing to OLED manufacturers if the price of iridium goes up.

Far more interesting may be another aspect of Lupton’s discovery: The spins of the charge carriers in this type of OLED control the wavelength of the light emitted. “You can actually measure the spin of the electron within your OLED by using the color,” Lupton says. And because spin on the quantum level translates into magnetism in the macro world, “if you design this in the right way you could make it incredibly sensitive to magnetic fields,” Lupton says. “That will actually allow us to use the color of the OLED as a compass.” He says such a device could be more sensitive than Hall effect devices, which are common in smartphone navigation systems and in automobiles, where they measure the rotation of the wheels.

The research might even provide a new clue to how birds navigate using Earth’s magnetic field. While the exact mechanism is still debated, some scientists suspect that their eyes let them see shifts in the field as subtle variations of color. Lupton says they may be sensitive to color changes caused by alterations in the spins of electrons.

Source: IEEE Spectrum 2014


Spinal Stimulation Gets Paralyzed Patients Moving

Implanted electrodes can reach where the brain cannot


Dustin Shillcox fully embraced the vast landscape of his native Wyoming. He loved snowmobiling, waterskiing, and riding four-wheelers near his hometown of Green River. But on 26 August 2010, when he was 26 years old, that active lifestyle was ripped away. While Shillcox was driving a work van back to the family store, a tire blew out, flipping the vehicle over the median and ejecting Shillcox, who wasn’t wearing a seat belt. He broke his back, sternum, elbow, and four ribs, and his lungs collapsed.

Through his five months of hospitalization, Shillcox’s family remained hopeful. His parents lived out of a camper they’d parked outside the Salt Lake City hospital where he was being treated so they could visit him daily. His sister, Ashley Mullaney, implored friends and family on her blog to pray for a miracle. She delighted in one of her first postaccident communications with her brother: He wrote “beer” on a piece of paper. But as Shillcox’s infections cleared and his bones healed, it became obvious that he was paralyzed from the chest down. He had control of his arms, but his legs were useless.

At first, going out in public in his wheelchair was difficult, Shillcox says, and getting together with friends was awkward. There was always a staircase or a restroom or a vehicle to negotiate, which required a friend to carry him. “They were more than happy to help. The problem was my own self-confidence,” he says.

A few months after being discharged from the hospital, in May 2011, Shillcox saw a news report announcing that researchers had for the first time enabled a paralyzed person to stand on his own. Neuroscientist Susan Harkema at the University of Louisville, in Kentucky, used electrical stimulation to “awaken” the man’s lower spinal cord, and on the first day of the experiments he stood up, able to support all of his weight with just some minor assistance to stay balanced. The stimulation also enabled the subject, 23-year-old Rob Summers, to voluntarily move his legs in other ways. Later, he regained some control of his bladder, bowel, and sexual functions, even when the electrodes were turned off.

The breakthrough, published in The Lancet, shocked doctors who had previously tried electrically stimulating the spinal nerves of experimental animals and people with spinal-cord injuries. In decades of research, they had come nowhere near this level of success. “This had never been shown before—ever,” says Grégoire Courtine, who heads a lab focused on spinal-cord repair at the Swiss Federal Institute of Technology in Lausanne and was not involved with the project. “Rob’s is a pioneer recovery. And what was surprising to me was that his was better than what we’ve seen in rats. It was really exciting for me to see.”

Image: Susan Harkema/The Lancet/Elsevier
Embedded hardware: This X-ray image of one of Susan Harkema’s patients shows the electrode array implanted next to his spinal cord and the pulse generator below.

The report brought renewed hope for people living with paralysis. The prognosis is normally grim for someone like Shillcox, who has a “motor complete” spinal-cord injury. That level of damage usually results in a total loss of function below the injury site.

Teams of scientists have been working on transplanting stem cells for neural repair and modifying the spinal cord in other ways to encourage it to grow new neurons, but these long-term approaches remain mostly in the lab. Harkema’s breakthrough, however, produced a real human success story and gives hope to paralyzed people everywhere. It presents a viable means of regaining bowel, bladder, and sexual functions, and maybe—just maybe—points the way toward treatments that could give paralyzed people the ability to walk again.

But Harkema’s first experiment involved only one patient, and many researchers wondered whether the improvement they saw in Summers was an anomaly. “The next big question was, Will you ever see these things in more than one subject?” says neurobiologist V. Reggie Edgerton of the University of California, Los Angeles, a collaborator in the Louisville experiments.

The U.S. Food and Drug Administration (FDA) had given Harkema the go-ahead to try the technique in four more paralyzed people. Shillcox put his name in the pool the night he saw the news report. He was selected, and in July 2012 he packed his wheelchair into his retrofitted Dodge Journey and drove himself from Green River to Louisville to begin 18 months of experiments.

The circuitry of the lower spinal cord is impressively sophisticated. Neuroscientists believe that the brain merely provides high-level commands for major functions, like walking. Then the dense neural bundles in the lower spinal cord take over the details of coordinating the muscles, allowing the brain to focus on other things. That division of labor is what lets you navigate a party and focus on the conversation rather than on your steps. After a spinal-cord injury, damage prevents the high-level signal from the brain from reaching the neurons below. Yet those neural bundles remain intact and are just waiting to receive a signal to start the muscles working. Stimulating the lower spinal cord with electrodes can awaken that circuitry and get it functioning, astonishingly, without instructions from the brain.

Photo: Greg Ruffing
Patient No. 3: Andrew Meas practices standing independently during an electrical stimulation session at the Frazier Rehab Institute, in Louisville, Ky. Neuroscientist Susan Harkema [standing at right] varies the electrical signals sent to Meas’s spinal cord to control different muscle groups.

It has been known since the mid-1970s that direct stimulation of the spinal cord can actually induce the legs to move as if they were taking steps, without any input from the brain. Edgerton and other researchers have demonstrated the concept definitively in paralyzed cats, rats, and a few humans. But in most of these demonstrations, researchers were blasting a large amount of electrical current into the body to force the muscles to move. “Everyone, including us, was hung up on the idea that you have to stimulate at this high level to induce the movement,” says Edgerton. What they missed was that the stimulation was essentially overwhelming the neurons in the lower spinal cord and was actually interfering with their ability to process sensory information that can help the body move on its own.

The neurons in the spinal cord don’t only receive signals from the brain; they also process sensory feedback from the body as the muscles move and balance shifts. The importance of that sensory feedback gradually emerged with some animal experiments Edgerton reported in Nature Neuroscience in 2009. The study suggested that sensory input could actually control the motor commands produced by the spinal cord.

Harkema, a former student of Edgerton’s, ran with that concept. In her experiments with Summers, she stimulated his spinal cord just enough to wake it up and then let the sensory input do its thing. “It’s like putting a hearing aid on the spinal cord,” says Edgerton. “We’ve changed the physiological properties of the neural network so that now it can ‘hear’ the sensory information much better and can learn what to do with it.”

Harkema’s group uses an off-the-shelf neurostimulation system—made by Minneapolis-based Medtronic—that’s FDA approved for pain management. The system’s array of 16 electrodes is surgically implanted in the epidural space next to the outermost protective layer of the spinal cord. The array is then connected to a pulse generator (which resembles a pacemaker) that’s implanted nearby. Finally, the pulse generator receives a wireless signal from a programming device outside the body.

The array spans approximately six spinal-cord segments, the ones generally responsible for movement in the lower half of the body. By placing the electrodes over them, the researchers can generate a response in the corresponding muscle groups. Electrode 5, for example, is located near a segment of the spinal cord that controls hip muscles. Electrode 10 is located at the bottom of the array, over the segment that controls the lower leg.

Each of the array’s 16 electrodes can be set to act as a cathode or an anode or be completely shut off. Stimulation intensities can range from 0 to 10.5 volts with pulses sent at frequencies ranging from 2 to 100 hertz, although the researchers usually don’t go beyond 45 Hz. Picking the right combination of electrodes and stimulation parameters to generate a simple response in a single muscle is relatively straightforward. But generating a complex behavior like standing, which involves many muscle groups and a considerable amount of sensory feedback, is far more difficult. Choosing the right electrode configurations for standing requires both a tremendous amount of intuition and plenty of trial and error. “That’s the challenge: to create the electrical field that’s going to give you the desired behavior,” says Harkema.
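The scale of that trial-and-error problem follows directly from the hardware description: each of the 16 electrodes independently takes one of three states, and every resulting pattern can then be driven at many amplitudes and frequencies. A quick Python sketch of the combinatorics (the dictionary encoding is illustrative, not Harkema’s actual software):

```python
N_ELECTRODES = 16
STATES = ("+", "-", "off")                 # anode, cathode, or disabled

# Each electrode independently takes one of three states:
n_patterns = len(STATES) ** N_ELECTRODES
print(f"{n_patterns:,}")                   # 43,046,721 -- roughly the 4.3e7 cited below

# A configuration is just a state per electrode; this mirrors the
# 12-electrode setting called out in the session described below.
config = {1: "+", 2: "+", 3: "+", 6: "+", 9: "+", 12: "+", 13: "+", 14: "+",
          4: "-", 7: "-", 8: "-", 10: "-"}  # electrodes 5, 11, 15, 16 off
# On top of each pattern sit continuous knobs -- amplitude (0 to 10.5 V)
# and pulse frequency (2 to 100 Hz) -- so exhaustive search is hopeless.
```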

Photos: Greg Ruffing
Balancing Act: Dustin Shillcox works on balance and core muscle control at the Frazier Rehab Institute, in Louisville, Ky. The spinal stimulation system is turned on throughout the exercises.

On a Wednesday in February of this year, Shillcox arrived at the Frazier Rehab Institute in downtown Louisville for one of his first stimulation sessions. The array and pulse generator had been implanted a few weeks before. He wore Nike sneakers and black gym shorts, revealing thin legs atrophied from lack of use.

Shillcox joined Harkema and her team in a large room equipped with custom rehabilitation equipment. He wheeled himself to a three-sided stand Harkema had made out of metal pipes that she’d bolted to a piece of plywood. Researchers taped 14 sensors to Shillcox’s legs. Using electromyography (EMG), these sensors would measure the electrical activity produced by his muscles and indicate how Shillcox was responding to the stimulation. Two trainers hoisted Shillcox from his wheelchair onto his feet and into the stand. Then they took their positions to keep him upright—one in front of Shillcox with both hands pushing against his knees and the other behind, steadying his hips. Shillcox held onto the stand with his hands, and a bungee cord supported him from behind.

That day, Harkema planned to test new stimulation configurations to see whether one of them would allow Shillcox to stand on his own. She took a seat in front of a screen displaying the EMG signals while two other researchers helped monitor the data from other screens. To start the session, Harkema called out the electrode settings: “1+, 2+, 3+, 9+, 14+, 12+, 13+, 6+, 7–, 8–, 4–, 10–.” This configuration used 12 of the 16 electrodes, 8 of them as anodes (positively charged) and 4 of them as cathodes (negatively charged). Harkema instructed her team to set the pulsation frequency at 30 Hz and the initial intensity at 1 V and to ramp up by a tenth of a volt at a time. “Left independent,” a trainer called out when the stimulation reached 1.5 V. Shillcox bore his weight on his left leg without assistance for about 30 seconds.

Harkema jotted in her lab book and instructed the team to turn off electrode 10, the one targeting Shillcox’s lower leg. “Going to zero,” a researcher called out. He powered down the system, punched in the new electrode configuration without electrode 10, and ramped it up again. At 2.6 V, Shillcox’s knees buckled. “It shot me out,” Shillcox said. The electrodes hadn’t sent the signal to the legs to stand straight but had twitched his knees forward instead. The stimulation pattern and parameters weren’t quite right.

Harkema tried more configurations, but each time Shillcox felt nothing until Harkema hit a particular voltage threshold, at which point Shillcox’s knees would give way. After 75 minutes, on the 10th and last try, Harkema removed the bungee supporting Shillcox from behind. The muscle activity on the EMG monitors skyrocketed. He’d been balancing so perfectly with the bungee cord that he hadn’t been getting enough external sensory information to activate his muscles, Harkema concluded, so there had been little input flowing back to the lower spinal cord. She instructed her team to devote the next few sessions to the last electrode pattern of the day, but without the bungee.

The technological limitations of the stimulation system make these trials unnecessarily difficult. Each time Harkema changes the configuration of electrodes, she has to turn off the electric field they generate and start over at 0 V. It’s a safety feature of this off-the-shelf stimulator, but it destroys the body’s neural momentum. “You can get really close, and you think the person is almost standing independently, and if you could just shift the field a little you would have it. But you can’t. You have to go to zero. And then everything starts over,” says Harkema. The limitation makes it especially difficult to induce a stepping motion in her patients. “It’s a left-to-right problem. If we get the right leg to step, the left is doing nothing,” she says.

It doesn’t help that there are something like 4.3 × 10⁷ possible electrode patterns she can try and that each can be tried with a range of frequencies and voltages. Without an algorithm to help her choose parameters, Harkema must rely on her experience, some limited neural mapping data, and what she sees on her monitors. “I have to look at the EMG data whizzing by and then make decisions about what I can change out of these 4.3 × 10⁷ combinations to get it better,” says Harkema. She’s gotten pretty good at making adjustments, but she acknowledges that no one can fully interpret the nuances of all that EMG data.

To do better, Harkema has enlisted the help of a handful of engineers who say they can build a stimulation system specifically for her research. At the California Institute of Technology, mechanical engineer Joel Burdick is developing a machine-learning algorithm that aims to take some of the guesswork out of choosing stimulation parameters.

The algorithm is based on statistical methods that predict the patient’s likely response to stimulation patterns—even those that haven’t been tested yet. The prediction part is crucial because there’s no way to try out all the options: There are millions of electrode configurations, and every patient is different. And just to make things even more complicated, patients’ spinal cords change during the course of the stimulation experiments. “The amount of time it would take to test that space is beyond a patient’s lifetime,” says Burdick. So the algorithm has to learn quickly. It must apply reasonable stimulation patterns and then use the patient’s EMG responses to choose better configurations.
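The article doesn’t spell out Burdick’s algorithm, but the problem it describes, choosing the next pattern to test from a statistical model of the responses seen so far, has the shape of a classic active-learning loop. Here is a hedged sketch of that general idea using a Gaussian-process surrogate; the response function, parameter space, and trial budget are all invented for illustration and are not the team’s code:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

# Toy stand-in for the patient: maps a stimulation parameter vector
# (here just amplitude and frequency, normalized to [0, 1]) to a noisy
# EMG response score. The real response surface is unknown in advance.
def emg_response(x):
    return np.exp(-8 * ((x[0] - 0.6)**2 + (x[1] - 0.3)**2)) + rng.normal(0, 0.02)

candidates = rng.uniform(0, 1, size=(500, 2))   # untested parameter settings
tested_x, tested_y = [candidates[0]], [emg_response(candidates[0])]

gp = GaussianProcessRegressor()
for _ in range(20):                              # only 20 trials on the "patient"
    gp.fit(np.array(tested_x), np.array(tested_y))
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + std)]      # optimism: explore and exploit
    tested_x.append(nxt)
    tested_y.append(emg_response(nxt))

print(max(tested_y))   # a good response found after ~20 tests, not 500
```

The key property, as the article notes, is that the model predicts responses to patterns that have never been tried, so each real trial on the patient is spent where it is most informative.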

Burdick’s team is working with Edgerton’s lab at UCLA to test the algorithm on paralyzed rats. The researchers are starting simply, using just a couple of electrodes and trying to maximize the response in a particular muscle. The first step is to make sure the algorithm is making reasonable decisions. The team has also begun a small human pilot study, Burdick says.

Meanwhile, John Naber, an electrical engineer at the University of Louisville, and a team of engineers are developing a stimulation system that would give Harkema independent control of all 16 electrodes in Medtronic’s array. The design would allow her to transition from one configuration to the next without shutting off the current. The team is building a new pulse generator using off-the-shelf components, and they’ve already written the code and roughed out a design. The challenge, Naber says, will be getting it approved by the FDA in a reasonable amount of time. “It’s not like a commercial integrated circuit or product, because of the FDA requirements for human implants,” Naber says.

The lingering question is whether Medtronic’s 16-electrode array is the best one for Harkema’s work. It was designed to treat pain, so the current diffuses rather broadly. Yu-Chong Tai, an electrical engineer at Caltech, thinks that an array with smaller electrodes arranged more densely might offer the precise stimulation needed after spinal-cord injury. The prototype he’s testing in rats has 27 electrodes arranged over a 2-centimeter-long array. A human version would be similar in size to Medtronic’s (about 5 cm long) but would contain hundreds of electrodes. Of course, more electrodes would mean exponentially more configuration options. “If we give them more electrodes, they will need a smart algorithm,” says Tai.

Until Naber and Tai’s prototypes can be approved by the FDA and Burdick’s algorithm can be fine-tuned, the Medtronic system will have to suffice. That may limit what Harkema can achieve when she puts Shillcox and her other research subjects on the stimulator, especially in terms of stepping. Even so, Shillcox has reason to hope that the experiments will boost his quality of life. Rob Summers, Harkema’s first subject, says his perspective on life has greatly improved since he regained bladder, bowel, and sexual functions. “This project has given me my freedom back,” he says.

Research subjects No. 2 and No. 3 have completed their initial trials. Like Summers, both were able to stand while on the stimulator, as Harkema and her colleagues reported at a Society for Neuroscience meeting in 2012. The researchers have not publicly announced whether other voluntary movement and physiological functions, such as bladder control, have returned for those individuals.

Shillcox—subject No. 4—remains hopeful, but he’s trying to keep his expectations realistic. “I don’t want to be too optimistic, and I’m trying to be prepared for no results at all,” he says. “I hope that whatever they find from this research will at least benefit other people.” Shillcox will likely complete his training by the end of the year, and Harkema says she cannot yet publicly reveal their preliminary results. Whatever the medical benefits ultimately prove to be, working with Harkema as a pioneer on an experimental treatment for spinal-cord injury has boosted Shillcox’s confidence around others. “I have no problem asking for help now,” he says.

Source: IEEE Spectrum November 2013


The Tunneling Transistor

Our always-on world of PCs, tablets, and smartphones has come about because of one remarkable trend: the relentless miniaturization of the metal-oxide-semiconductor field-effect transistor, or MOSFET. This device, which is the building block of most integrated circuits, has shrunk a thousandfold over the past half century, from the tens-of-micrometers scale in the 1960s to tens of nanometers today. And as the MOSFET has become tinier, generation after generation, the chips based on it have become much faster and less power hungry than their predecessors.

This trend has given rise to one of the longest and greatest winning streaks in industrial history, bringing us gadgets, capabilities, and conveniences that previous generations could scarcely have imagined. But now this steady progress is under threat. And at the heart of the problem lies quantum mechanics.

The electron has a pesky ability to penetrate barriers—a phenomenon known as quantum tunneling. As chipmakers have squeezed ever more transistors onto a chip, transistors have gotten smaller, and the distances between different transistor regions have decreased. So today, electronic barriers that were once thick enough to block current are now so thin that electrons can barrel right through them.

Chipmakers have already stopped thinning one key transistor component—the gate oxide. This layer electrically separates the gate, which turns a transistor on and off, from the current-carrying channel. Make this oxide thinner and you can induce more charge in the channel, boost the current, and make the transistor faster. But you can’t reduce the oxide thickness to much less than roughly a nanometer, which is about where it is today. Beyond that, too much current will tunnel through the oxide when the transistor is “off,” when ideally no current should flow at all. And that’s just one of several leakage points.

It has long been hard to pin down the precise year when size reductions will end. Industry road maps now project the miniaturization of the MOSFET out to 2026, when gates will be just 5.9 nanometers long—about a quarter the length they are today. This timeline assumes that we’ll be able to find better materials to stanch leaks. But even if we do, we’ll need to find a replacement for the MOSFET soon if we want to continue getting the performance enhancements we’re used to.

We can’t stop electrons from tunneling through thin barriers. But we can turn this phenomenon to our advantage. In the last few years, a new transistor design—the tunnel FET, or TFET—has been gaining momentum. Unlike the MOSFET, which works by raising or lowering an energy barrier to control the flow of current, the TFET keeps this energy barrier high. The device switches on and off by altering the likelihood that electrons on one side of that barrier will materialize on the other side.

Back or Through: In classical electrodynamics, an electron [blue] would bounce back from an energy barrier [orange] if its energy did not exceed the barrier height. In fact, electrons have a finite probability of passing through the energy barrier. The thinner the barrier, the higher the probability that such a tunneling event might occur.

That’s a huge departure from the way traditional transistors work. But it might be just the thing to pick up where the MOSFET leaves off, paving the way for faster, denser, and more energy-efficient circuits that will extend Moore’s Law well into the next decade.

It wouldn’t be the first time the transistor has changed form. Initially, semiconductor-based computers used circuits made from bipolar transistors. But only a few years after the silicon MOSFET was demonstrated in 1960, engineers realized they could make two complementary switches: n-channel devices that conduct electrons and p-channel devices that conduct holes. These could be combined to make complementary metal-oxide-semiconductor (CMOS) circuits that, unlike bipolar transistor logic, consumed energy only while switching. Ever since the first integrated circuits based on CMOS emerged in the early 1970s, the MOSFET has dominated the marketplace.

In many ways, the MOSFET wasn’t a big departure from the bipolar transistor. Both control the current flow by raising and lowering energy barriers—a bit like raising and lowering a floodgate in a river. The “water” in this case consists of two kinds of current carriers: the electron and the hole, a positively charged entity that’s essentially the absence of an electron in the outer energy shell of an atom in the material.

There are two allowable energy ranges, or bands, for these charge carriers. Electrons with enough energy to flow freely through the material are in the conduction band. Holes flow in a lower-energy band, called the valence band, and they move from atom to atom, much as an empty parking space might migrate around a nearly full parking lot as neighboring cars pull in and out.

These bands are fixed, but we can shift the energies associated with them up or down by adding impurities, or dopant atoms, to alter the conductivity of the semiconductor. N-type semiconductors, which are doped to contain an excess of electrons, conduct negatively charged electrons; p-type semiconductors, which are doped to produce a deficit of electrons, conduct positively charged holes.

If we put these two semiconductor types together, we get a junction with bands that are misaligned, thus creating an energy barrier between them. To make a MOSFET, we insert one type of material between two of the complementary type, in either an n-p-n or a p-n-p configuration. This creates three regions in the transistor: the source, where charges enter the device; the channel; and the drain, where they exit.

The two p-n junctions in each transistor provide electronic barriers to the flow of charges, and the transistor can be switched on by applying a voltage to the gate on top of the channel. A positive voltage applied to an n-channel MOSFET gate makes the channel more attractive to electrons, because it decreases the amount of energy an electron needs to have in order to move into the channel. A negative voltage applied to a p-channel MOSFET gate will perform the same task for holes.

This simple barrier-lowering strategy is the most widely used current-control mechanism in semiconductor electronics. Diodes, lasers, bipolar transistors, thyristors, and most field-effect transistors all take advantage of it. But it has a physical limit: Transistors need a certain amount of voltage to be switched on or off. This arises from the fact that electrons and holes are in constant motion due to their thermal energy, and the most energetic among them spill over the barrier. At room temperature, the current flowing over the barrier increases by a factor of 10 when the energy barrier is lowered by 60 millivolts; every “decade” of current change requires a change of 60 mV.
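
That 60-millivolt figure isn’t arbitrary; it falls out of the Boltzmann distribution of carrier energies. The theoretical minimum swing is (kT/q) × ln(10), about 60 mV per decade at room temperature. A quick check:

    from math import log

    k = 1.380649e-23     # Boltzmann constant, J/K
    q = 1.602176634e-19  # elementary charge, C
    T = 300.0            # room temperature, K

    swing_mv = (k * T / q) * log(10) * 1e3  # mV per decade of current
    print(f"{swing_mv:.1f} mV/decade")      # ~59.6 mV/decade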

All this current leakage occurs below the device’s threshold voltage, which is the voltage needed for the transistor to turn on. Device physicists call this barrier-lowering region the subthreshold region, and 60 mV per decade is known as the minimum subthreshold swing. To keep power consumption down, subthreshold swing should be as low as possible. The device will then need less voltage to be switched on, and it will leak less current when it’s off.

Off and On: In an n-channel MOSFET, electrons move in the conduction band (Ec) from source to drain. The device’s state can be switched from off [top left] to on [top right] if enough voltage is applied to draw down the energy barrier between the two regions. In an n-channel TFET, electrons originate in the source’s valence band (Ev). A small voltage applied to the gate lowers the conduction band of the channel so it overlaps in energy with the source valence band, allowing electrons to tunnel into the channel.

Subthreshold swing wasn’t much of an issue in the past, in the days when chips ran with higher voltages. But now it’s starting to interfere with our ability to drive down power. That’s partly due to the fact that circuit designers want to make sure their logic components have a big difference between the current that is used to define a “0” and the current that defines a “1.” Transistors are typically designed so that when they’re on they carry about 10 000 times as much current as they leak when they’re off. That means at least 240 mV must be applied to the transistor to turn it on: four decades of current, each decade requiring at least 60 mV.
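
That floor is just the on/off ratio translated into decades of current:

    from math import log10

    on_off_ratio = 10_000          # desired ratio of on-current to off-current
    decades = log10(on_off_ratio)  # 4 decades of current
    v_min_mv = decades * 60        # at the 60 mV/decade thermal limit
    print(f"{v_min_mv:.0f} mV")    # 240 mV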

In practice, the operating voltages used in CMOS circuits are typically much higher, closer to 1 volt. That’s because the fundamental logic circuit in CMOS, the inverter, uses two transistors in series. The NAND gate takes three transistors in series, which means it requires even more voltage than the inverter. If you adjust for process variability—which means you need to set wider voltage margins to account for variations from device to device—you arrive at the voltages needed today to guarantee operation.

These voltage requirements, coupled with the leakage problems, mean we’re in the waning days of MOSFET miniaturization. There’s no way around it. If we want to lower voltage further to cut down on power consumption, we have two equally unattractive options. We can lower the current we drive through the device, which lowers the switching speed and thus sacrifices performance, or we can keep the current high and allow more current to leak through the device when it’s supposed to be off.

That’s where the tunnel FET comes in. Instead of raising or lowering the physical barrier between the source and drain as you would in a MOSFET, we use the gate to control the effective, electrical thickness of the barrier and thus the probability that electrons can slip through it.

Image: R. Li/University of Notre Dame
Bridge and Tunnel: Pairing semiconductors made of different compounds can dramatically boost current. This TFET uses aluminum gallium antimonide to make the source and drain regions of the device. A comblike air bridge, made of indium arsenide, is used to connect the channel to the drain for better electrical isolation. Metal air bridges are also used to wire the source, gate, and drain.

The trick to doing this is again the p-n junction—with a bit of a twist. In a TFET, we arrange semiconducting material in p-i-n and n-i-p configurations. The i stands for “intrinsic,” and it means that the channel has as many electrons as there are holes. The intrinsic state corresponds to the maximum resistivity that a semiconductor can have. It also pushes up the energies associated with the bands in the channel, introducing a thick energy barrier that charge carriers in the source are unlikely to traverse.

Electrons and holes obey the laws of quantum mechanics, which means they have a fuzzy, uncertain size. When an energy barrier has a thickness below about 10 nanometers, there is a small but nonzero probability that an electron that starts on one side of the barrier will appear on the other.
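
The falloff with barrier thickness is steep. For a simple rectangular barrier, the transmission probability scales roughly as exp(−2κd), where d is the barrier width and κ is a decay constant set by the barrier height. A rough illustration (the 0.5-electronvolt barrier height is an arbitrary choice for the example, not a device parameter from the text):

    from math import exp, sqrt

    HBAR = 1.0546e-34  # reduced Planck constant, J*s
    M_E = 9.109e-31    # free-electron mass, kg
    EV = 1.602e-19     # joules per electronvolt

    def transmission(width_nm, height_ev=0.5):
        """Approximate tunneling probability, T ~ exp(-2*kappa*d)."""
        kappa = sqrt(2 * M_E * height_ev * EV) / HBAR  # decay constant, 1/m
        return exp(-2 * kappa * width_nm * 1e-9)

    print(transmission(1.0))   # ~7e-4: a 1-nm barrier leaks noticeably
    print(transmission(10.0))  # ~4e-32: a 10-nm barrier is effectively opaque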

In the TFET, we boost this probability by applying a voltage to the transistor gate. This causes the valence band in the source and the conduction band in the channel to overlap, opening up a tunneling window. Note that in a TFET, the electrons tunnel between valence and conduction bands as they move into the channel. This is in contrast to what happens in a MOSFET, in which electrons or holes travel primarily in either one band or the other all the way from source to channel to drain.

Because the tunneling mechanism isn’t controlled by the flow of carriers over a barrier, TFETs should be able to switch with a much smaller voltage swing than that required in a MOSFET. You have to apply only enough voltage to create or remove an overlap, crossing and uncrossing the bands [see bottom half of illustration “Off and On”, above].

As a device mechanism, tunneling is not a new idea. The flash memory inside our USB sticks, cellphones, and other gadgets uses tunneling to inject electrons across oxide barriers into charge-trapping regions. Tunnel junctions like the one used in the TFET are also widely used to connect multijunction solar cells and to trigger semiconductor-based quantum cascade lasers. And tunneling governs the way current flows across metal-semiconductor contacts, an essential part of every semiconductor device.

The p-n tunnel junction has also been around a while. It was first demonstrated and explained by Nobel Prize winner Leo Esaki in 1957. But it took a fundamental impediment to get the industry to think seriously about how tunneling might be applied to logic.

Image: T. Mayer/Penn State
All-Around Device: Today’s cutting-edge transistors are three-dimensional, with gates that drape around three sides of a finlike channel. The TFET above employs a gate that wraps entirely around the channel. In this device, charge carriers move from left to right through source, channel, and drain regions made of indium gallium arsenide.

The first TFET papers were written only about nine years ago, when chipmakers started to see computer clock speeds stall and struggled with the problem of removing heat from denser, leakier chips.

Joerg Appenzeller and his colleagues at IBM were the first to demonstrate that current swings below the MOSFET’s 60-mV-per-decade limit were possible. In 2004, they reported they had created a tunnel transistor with a carbon nanotube channel and a subthreshold swing of just 40 mV per decade. Within a few years, groups at UC Berkeley, CEA-Leti, Imec, and Stanford had followed suit. They showed that switches that consume less than 60 mV per decade could be made using semiconducting materials that are staples of the chip industry: silicon and germanium.

That got the community excited, because although the current-control mechanism in the TFET is new to the semiconductor industry, the device bears a strong resemblance to the MOSFET. It has the same basic configuration of source, drain, and gate and similar electrical behavior when wired into circuits. The semiconductor design infrastructure does not need to change.

But some changes are required. It turns out that silicon and germanium aren’t great for tunneling. It’s for the same reason that these materials don’t make good light emitters and lasers. Silicon and germanium have indirect bandgaps, which means that in order to transition from one band to another, electrons must also absorb some extra energy from vibrations in the crystal lattice that makes up the material. This extra hurdle significantly lowers the probability that charge carriers will make the leap. As a result, the current-carrying capacity of silicon and germanium TFETs is only a trickle compared with that of today’s transistors.

That might be a stumbling block for adoption by the industry. However, there is a family of direct-bandgap materials, based on a mix of elements picked from columns III and V of the periodic table, with considerably higher tunneling probabilities. These materials have yet to make it into mass production in logic chips, but work on incorporating them into traditional MOSFETs is already gearing up [see “Changing the Channel,” IEEE Spectrum, July 2013]. The notion that they might emerge in logic chips in the foreseeable future is not nearly as far-fetched as it would have seemed just a few years ago.

Research into TFETs made from III-V materials has also been advancing rapidly in recent years. Suman Datta and his colleagues at Pennsylvania State University were the first to demonstrate III-V TFETs, in 2009. They used channels made of a mix of indium, gallium, and arsenic and immediately set a record, with an “on” current that was 50 times as high as with the best germanium TFET.

Since then, the Penn State team and my group at the University of Notre Dame, in South Bend, Ind., have both shown even higher currents in TFETs made from a mix of two compounds: aluminum gallium antimonide and indium arsenide. The former material has bands that can be shifted up or down by tuning the ratio of aluminum to gallium. This lets us create tunnel junctions that have a natural overlap between bands, which means less voltage is needed to turn them on. And because the barrier can be quite thin—just a single atom or so wide—they permit more current. The devices we have built perform well at just 0.5 V and can carry nearly 200 microamperes across a 1-micrometer-wide channel, comparable to what can be accomplished with a state-of-the-art MOSFET.

The one caveat is the subthreshold swing of these “heterojunction” TFETs, which so far hasn’t been able to beat the 60-mV-per-decade limit for the MOSFET. Many research groups are now struggling with this challenge. The main culprit is defects—many of which arise from dangling chemical bonds—at the interface between the semiconductor and gate oxide. These defects trap and immobilize charges, leaving fewer charges available for conduction. This means we have to apply a greater voltage to the gate to induce charge carriers in the channel.

That said, there is reason for optimism. Groups based at Intel, in Hillsboro, Ore., and Hokkaido University, in Sapporo, Japan, have demonstrated III-V TFETs with subthreshold swings of less than 60 mV per decade. And simulations from Intel suggest that it’s possible to drive down subthreshold swing even further without monumental changes in materials, simply by scaling down the transistors they have already built. In principle, devices with subthreshold swings of around 20 mV per decade appear possible; the ultimate limit will be set by thermal vibrations in the crystal, which make the edges of the conduction and valence bands less sharp.

Image: N. Goel/C. Park/SEMATECH/University of Notre Dame
Layer on Layer: The same etch-and-deposition processes used to make metal oxide gate stacks in today’s silicon chips can be used to make TFET transistor regions. This close-up view shows the region containing the source and channel of a TFET (the drain is at the right, out of view). The source and channel are made of oppositely doped indium gallium arsenide. The device is controlled by a gate made of titanium nitride, which is isolated from the channel by a layer of aluminum oxide.

Much as it would have been hard to predict the MOSFET’s ultimate capabilities 50 years ago, it’s difficult to say exactly what may ultimately be achieved with the TFET.

One uncertainty is the maximum current a TFET can carry when it’s on. On-current is what ultimately determines the maximum speed of circuits, and for a long time, researchers thought it might be fairly low. But in 2010, Siyu Koswatta at IBM showed in simulation that gallium antimonide and indium arsenide could potentially carry 1900 µA per micrometer of channel width when supplied with just 0.4 V. If such a device could be built, it would compete directly with the MOSFET in high-performance applications. The International Technology Roadmap for Semiconductors targets a current of 1685 µA per micrometer of channel width, at a voltage of 0.73 V.

We will also have to tackle the issue of current leakage when the TFET is in its off state. As the channel gets shorter and shorter, it will be easier for electrons to tunnel directly from the source into the drain.

Figuring out the ultimate limits of the device will depend on such factors as electronic structure, defects, and performance requirements. Fortunately, computational tools developed over the past five years at Purdue University and at ETH Zurich now allow researchers to simulate entire devices, atom by atom and bond by bond, to predict device behavior. This activity is helping to guide experiments.

While the TFET’s electrical characteristics look promising, there are also quite a few practical things we must tackle before we can start building chips with these transistors. Researchers have been focusing most of their energy on developing n-channel TFETs. P-channel TFETs—and a complementary process technology that could pair the two transistor types to make circuits—are still on the drawing board.

And chipmakers still have to find ways to address the problem of variability. As MOSFETs shrink, the placement and concentration of dopants and the roughness of interfaces can lead to significant variability in electronic properties. TFETs—which will likely be even smaller than MOSFETs when they’re introduced—won’t be immune to this problem. As with the MOSFET, we will have to develop other approaches in parallel, such as redundancy and error correction, to address this issue.

Still, I am optimistic that there are more promising results to come. It took just 10 years to get from the first silicon MOSFET to the first CMOS microprocessor. The jump to the TFET is arguably a much bigger challenge. But with more than half a century of semiconductor experience under our belts, it might come about more quickly than we think.
