Tuesday, January 29, 2008

Graphene Transistors

Predicted electronic properties that have made researchers excited about a new material have now been demonstrated experimentally.

Speedy carbon: Thin ribbons of graphene (left) could be useful for future generations of ultra-high-speed processors (scale bar is 100 nanometers). Graphene is made of carbon atoms arranged in hexagons (right).
Credit: Hongjie Dai

A researcher at Stanford University has provided strong experimental evidence that ribbons of carbon atoms can be used for future generations of ultrafast processors.

Hongjie Dai, a professor of chemistry at Stanford, and his colleagues have developed a new chemical process that produces extremely thin ribbons of a carbon-based material called graphene. They have shown that these ribbons, once incorporated into transistors, exhibit excellent electronic properties. Such properties had been predicted theoretically, Dai says, but never demonstrated in practice, and they make graphene ribbons attractive for use in logic transistors in processors.

The discovery could lead to even greater interest in the experimental material, which has already attracted the attention of researchers at IBM, HP, and Intel. Graphene, which consists of carbon atoms arranged in a one-atom-thick sheet, is a component of graphite. Its structure is related to carbon nanotubes, another carbon-based material that's being studied for use in future generations of electronics. Both graphene and carbon nanotubes can transport electrons extremely quickly, which could allow very fast switching speeds in electronics. Graphene-based transistors, for example, could run at speeds a hundred to a thousand times faster than today's silicon transistors.

But graphene sheets have one significant disadvantage compared with the silicon used in today's chips. Although graphene can be switched between different states of electrical conductivity--the basic characteristic of semiconductor transistors--the ratio between the current it carries in these states, called the on/off ratio, isn't very high. That means that unlike silicon, which can be switched off almost completely, graphene continues to conduct a lot of electrons even in its "off" state. A chip made of billions of such transistors would waste an enormous amount of energy and therefore be impractical.
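
To get a feel for why a low on/off ratio matters, consider a rough back-of-the-envelope estimate. The Python sketch below uses assumed, illustrative values for the on-state current, supply voltage, and transistor count; none of these numbers come from the article, but they show how per-device leakage compounds across a chip.

```python
# Back-of-the-envelope estimate of power wasted by off-state leakage.
# All values are illustrative assumptions, not figures from the article.

I_ON = 10e-6          # assumed on-state current per transistor (10 microamps)
V_DD = 1.0            # assumed supply voltage (volts)
N_TRANSISTORS = 1e9   # a chip with a billion transistors

for on_off_ratio in (30, 100_000):
    i_off = I_ON / on_off_ratio                  # leakage current per "off" device
    static_power = i_off * V_DD * N_TRANSISTORS  # total power wasted while idle
    print(f"on/off ratio {on_off_ratio:>7,}: ~{static_power:.1f} W of leakage")
```

At a ratio of 30 to 1, the assumed chip leaks hundreds of watts even when idle; at 100,000 to 1, leakage drops below a watt.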

Researchers had theorized, however, that it might be possible to dramatically improve these on/off ratios by carving graphene sheets into very narrow ribbons just a few nanometers wide. There had been early evidence supporting these theories from researchers at IBM and Columbia University, but the ratios produced were still much lower than those in silicon.
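
The theoretical reasoning is that a bandgap opens as a ribbon narrows, scaling roughly inversely with width, and a transistor needs a sizable gap to switch off cleanly. The sketch below illustrates that scaling using an empirical constant of about 0.8 eV·nm that has been reported in the literature; treat both the constant and the simple inverse law as assumptions for illustration, not results from this work.

```python
# Illustrative inverse scaling of bandgap with ribbon width.
# The 0.8 eV*nm constant is an assumed empirical fit, not a value measured
# in this article; real ribbons also depend on edge structure.

ALPHA_EV_NM = 0.8  # assumed empirical constant (eV * nm)

for width_nm in (2, 5, 10, 50):
    gap_ev = ALPHA_EV_NM / width_nm
    print(f"{width_nm:>3} nm ribbon: bandgap ~ {gap_ev:.2f} eV")
```

Only ribbons a few nanometers wide develop a gap comparable to a conventional semiconductor's, which is why sub-10-nanometer widths are the target.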

Dai decided to take a different approach to making thin graphene ribbons. Whereas others had used lithographic techniques to carve away carbon atoms, Dai turned to a solution-based approach. He starts with graphite flakes, which are made of stacked sheets of graphene. Then he chemically inserts sulfuric acid and nitric acid molecules between the graphene layers in these flakes and rapidly heats them, vaporizing the acids and forcing the sheets apart. "It's like an explosion," Dai says. "The sheets go separate ways, and the graphite expands by 200 times."

Next, he suspends the now-separated sheets of graphene in a solution and exposes them to ultrasonic waves. These waves break the sheets into smaller pieces. Surprisingly, Dai says, the sheets fracture not into tiny flakes but into thin and very long ribbons. These ribbons vary in size and shape, but their edges are smooth--which is key to having consistent electronic properties. The thinnest of the ribbons are less than 10 nanometers wide and several micrometers long. "I had no idea that these things could be made with such dimensions and smoothness," Dai says.

When Dai made transistors out of these ribbons, he measured on/off ratios of more than 100,000 to 1, which is attractive for transistors in processors. Previously, room-temperature on/off ratios of graphene ribbons had been measured at about 30 to 1.

Still, many obstacles remain to making graphene processors using Dai's methods, says Walter de Heer, a physics professor at Georgia Tech. The ribbons made with Dai's process have to be sorted. Pieces that are too large or not in the shape of ribbons have to be weeded out. There also needs to be a way of arranging the ribbons into complex circuits.

However, researchers already have ideas about how to address these challenges. For example, graphene ribbons have more exposed bonds at their edges, so chemicals could be attached to these bonds that would direct the ribbons to bind to specific places to form complex circuits, de Heer says.

The best way to make graphene electronics, however, may be to take advantage of the fact that graphene can be grown in large sheets, says Peter Eklund, a professor of physics at Penn State. If better lithography methods are developed to pattern these sheets into narrow ribbons and circuits, this could provide a reliable way of making complex graphene-based electronics.

Ultimately, the most important aspect of Dai's work could be the fact that it has demonstrated electronic properties that were only theoretical before, Eklund says. And this could lead to even more interest in developing graphene for next-generation computers. "Once you get a whiff of narrow graphene ribbons with a high on/off ratio, this will tempt a lot of people to try to get in there and either make ribbons by high-technology lithographic processes, or try to improve the approach developed by Dai," says Eklund.


http://www.technologyreview.com/Nanotech/20119/

Looking into the Brain with Light

An Israeli company is working on a new device to monitor oxygen levels in the brain.

Mind reader: The OrNim targeted oximetry probe (above) adheres to the scalp to monitor the oxygen levels of specific areas of the brain.
Credit: OrNim

A new noninvasive diagnostic technology could give doctors the single most important sign of brain health: oxygen saturation. Made by an Israeli company called OrNim and slated for trials on patients in U.S. hospitals later this year, the technology, called targeted oximetry, could do what standard pulse oximeters can't.

A standard pulse oximeter is clipped onto a finger or an earlobe to measure oxygen levels under the skin. It works by transmitting a beam of light through blood vessels in order to measure the absorption of light by oxygenated and deoxygenated hemoglobin. The information allows physicians to know immediately if oxygen levels in the patient's blood are rising or falling.
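
As a rough illustration of that principle, the sketch below implements the classic "ratio of ratios" calculation that pulse oximetry is built on: the pulsatile (AC) and baseline (DC) components of the red and infrared signals are combined into a single ratio that tracks saturation. The linear calibration curve used here is a widely quoted textbook approximation; real devices rely on empirically calibrated curves specific to the hardware.

```python
# Simplified pulse-oximetry calculation ("ratio of ratios").
# The linear calibration below is a textbook approximation, not a real
# device's calibration table.

def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir):
    """Estimate oxygen saturation (%) from red and infrared signals."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)  # the "ratio of ratios"
    return 110.0 - 25.0 * r                  # common empirical approximation

# Example: r = 0.5 maps to about 97.5 percent saturation, a healthy reading.
print(spo2_estimate(ac_red=0.01, dc_red=1.0, ac_ir=0.02, dc_ir=1.0))
```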

Prior to the development of pulse oximeters, the only way to measure oxygen saturation was to take a blood sample from an artery and analyze it in a lab. By providing an immediate, noninvasive measure of oxygenation, pulse oximeters revolutionized anesthesia and other medical procedures.

While pulse oximeters have become accurate and reliable, they have a key limitation: because they measure only the blood's overall oxygen levels, they can't monitor oxygen saturation in specific areas deep inside the body. This is especially problematic in the case of brain injuries, since the brain's oxygenation can then differ from the rest of the body's.

Information on oxygenation in specific regions of the brain would be valuable to neurologists monitoring a brain-injured patient, as it could be used to search for localized hematomas and give immediate notice of hemorrhagic strokes. When a stroke occurs, an area of the brain is deprived of blood and thus oxygen, but there is no immediate way to detect the attack's occurrence.

CT and MRI scans give a snapshot of tissue damage, but they can't be used for continuous monitoring. It can also be very difficult to conduct such scans on unconscious patients hooked up to life-support devices.

Wade Smith, a neurologist at the University of California, San Francisco, and an advisor to OrNim, points out that, while cardiologists have devices to monitor hearts in detail, neurologists have no equivalent tool. With brain-injured patients, Smith says, "the state of the art is flying blind."

OrNim's new device uses a technique called ultrasonic light tagging to isolate and monitor an area of tissue the size of a sugar cube located between 1 and 2.5 centimeters under the skin. The probe, which rests on the scalp, contains three laser light sources of different wavelengths, a light detector, and an ultrasonic emitter.


The laser light diffuses through the skull and illuminates the tissue underneath it. The ultrasonic emitter sends highly directional pulses into the tissue. The pulses change the optical properties of the tissue in such a way that they modulate the laser light traveling through the tissue. In effect, the ultrasonic pulses "tag" a specific portion of tissue to be observed by the detector. Since the speed of the ultrasonic pulses is known, a specific depth can be selected for monitoring.
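
A minimal sketch of how such time gating could pick out a depth follows. The function and its parameters are hypothetical; the only physical input is the approximate speed of sound in soft tissue, about 1,540 meters per second.

```python
# Hypothetical sketch of depth selection by time gating: only light
# modulated while the ultrasonic pulse is at the chosen depth is counted.

SPEED_OF_SOUND = 1540.0  # m/s, approximate value for soft tissue

def gate_for_depth(depth_m, gate_width_s=2e-6):
    """Return the (start, end) detector time window, in seconds, during
    which the ultrasonic pulse is passing through the chosen depth."""
    t_arrival = depth_m / SPEED_OF_SOUND  # travel time from probe to depth
    return t_arrival, t_arrival + gate_width_s

# To monitor tissue 2 centimeters down, sample the detector roughly
# 13 microseconds after each ultrasonic pulse is emitted.
start, end = gate_for_depth(0.02)
print(f"gate: {start * 1e6:.1f} to {end * 1e6:.1f} microseconds after the pulse")
```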

The modulated laser light is picked up by the detector and used to calculate the tissue's color. Since color is directly related to blood oxygen saturation (for example, arterial blood is bright red, while venous blood is dark red), it can be used to deduce the tissue's oxygen saturation. The measurement is absolute rather than relative, because color is an indicator of the spectral absorption of hemoglobin and is unaffected by the scalp.
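
The sketch below shows, in simplified form, why readings at multiple wavelengths yield an absolute number: the absorbance at each wavelength is a weighted sum of contributions from oxygenated and deoxygenated hemoglobin, so two measurements are enough to solve for both concentrations and take their ratio. The extinction coefficients and detector readings here are placeholder values chosen for illustration, not OrNim's.

```python
import numpy as np

# Two-wavelength solve for hemoglobin concentrations (illustrative values).
# Rows are wavelengths; columns are [HbO2, Hb] extinction coefficients.
E = np.array([[0.27, 0.80],   # red-ish wavelength: Hb absorbs more
              [0.29, 0.18]])  # infrared-ish wavelength: HbO2 absorbs more

measured_absorbance = np.array([0.43, 0.26])  # placeholder detector readings

c_hbo2, c_hb = np.linalg.solve(E, measured_absorbance)  # solve E @ c = a
saturation = c_hbo2 / (c_hbo2 + c_hb)
print(f"tissue oxygen saturation ~ {saturation:.0%}")  # ~71% with these inputs
```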

Deeper areas could be illuminated with stronger laser beams, but light intensity has to be kept at levels that will not injure the skin. Given the technology's current practical depth of 2.5 centimeters, it is best suited for monitoring the upper layers of the brain. Smith suggests that the technology could be used to monitor specific clusters of blood vessels.

While the technology is designed to monitor a specific area, it could also be used to monitor an entire hemisphere of the brain. Measuring any area within the brain could yield better information about whole-brain oxygen saturation than a pulse oximeter elsewhere on the body would. Hilton Kaplan, a researcher at the University of Southern California's Medical Device Development Facility, says, "If this technology allows us to actually measure deep inside, then that's a big improvement over the limitations of decades of cutaneous versions."

Michal Balberg, the CEO and cofounder of OrNim, thinks that it may ultimately be feasible to deploy arrays of probes on the head to get a topographic map of brain oxygenation. In time, brain oxygenation may be considered a critical parameter that should be monitored routinely. Balberg says, "Our development is directed toward establishing a new brain vital sign that will be used to monitor any patient [who's] unconscious or under anesthesia. We believe that this will affect patient management in the coming decade in a manner comparable to pulse oximeters."

Michael Chorost covers medical devices for Technology Review. His book about cochlear implants, Rebuilt: How Becoming Part Computer Made Me More Human, was published in 2005.


http://www.technologyreview.com/Biotech/20123/

Voting with (Little) Confidence

Experts say that when it comes to voting machines, usability issues should be as much of a concern as security.

Touchy results: In a study of the usability of touch-screen voting machines, such as the Diebold AccuVote-TS, pictured above, participants made errors in voting as much as 3 percent of the time on the simplest tasks, and 15 percent of the time on more complicated tasks, such as changing a selection they had previously made. Although the error rates are relatively small, the researchers point out that they would matter in elections as close as those in recent years.
Credit: Ben Bederson, Human-Computer Interaction Lab, University of Maryland

Electronic voting systems--introduced en masse following high-profile problems with traditional voting systems in the state of Florida during the 2000 presidential election--were designed to quell fears about accuracy. Unfortunately, those concerns continue to permeate political conversation. The Emergency Assistance for Secure Elections Act of 2008, introduced recently by Rep. Rush Holt (D-NJ), proposes government funding for jurisdictions that use electronic voting to switch to systems that produce a paper trail. But many experts say that a paper trail alone can't solve the problem.

Ben Bederson, an associate professor at the Human-Computer Interaction Lab at the University of Maryland, was part of a team that conducted a five-year study on voting-machine technology. Bederson says that machines should be evaluated for qualities beyond security, including usability, reliability, accessibility, and ease of maintenance. For example, in a 2006 Florida congressional election, some voters were uncertain whether touch-screen machines had properly recorded their votes, especially after 18,000 ballots in Sarasota County were marked "No vote" by the machines. "Security, while important, happens to be one of those places where voting machines actually have not proven to fail," Bederson says. "However, in many other ways, they have failed dramatically, especially [regarding] usability. The original Florida problem was primarily a usability issue." (Among the problems in Florida in 2000 was the case of Palm Beach County, where some voters were confused by a ballot design that listed candidates in two columns. The confounding layout led some people to mistakenly vote for Patrick Buchanan when they intended to vote for Al Gore.)

Bederson's team, which included researchers from the University of Maryland, the University of Rochester, and the University of Michigan, particularly focused on usability. The team evaluated electronic voting systems built by Diebold, Election Systems and Software, Avante Voting Systems, Hart InterCivic, and Nedap Election Systems, as well as one prototype built by Bederson himself.

In the study, participants were told to vote for particular candidates in mock elections. The researchers then compared the results recorded on the machines with the voters' intentions. Bederson says that even for the simplest task--voting in one presidential race on a single screen--participants had an error rate of around 3 percent. When the task became more complicated, such as when voters were asked to change their selection from one candidate to another, the error rate increased to between 7 and 15 percent, depending on the system. Bederson notes that the error rates observed in the study may not translate directly into error rates for actual votes on real machines, but he says the findings are still cause for concern, considering how close some recent elections have been. In one of the group's mock elections, the errors were large enough that different candidates won the race depending on which machine was used. "As to whether errors are biased, the answer in general is that it depends on the specific usability problem," Bederson says.
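
The core measurement in a study like this reduces to simple bookkeeping: compare what each participant intended with what the machine recorded, grouped by task. The sketch below runs that comparison on a toy data set; the ballots, task labels, and numbers are hypothetical, not the study's data.

```python
# Toy version of the study's error-rate calculation. All data are
# hypothetical; the real study used far more participants and tasks.

from collections import defaultdict

ballots = [
    # (task type, intended vote, vote recorded by the machine)
    ("simple", "Candidate A", "Candidate A"),
    ("simple", "Candidate B", "Candidate B"),
    ("simple", "Candidate A", "Candidate C"),  # a mis-recorded vote
    ("change", "Candidate B", "Candidate A"),  # a failed change of selection
    ("change", "Candidate C", "Candidate C"),
]

errors, totals = defaultdict(int), defaultdict(int)
for task, intended, recorded in ballots:
    totals[task] += 1
    if intended != recorded:
        errors[task] += 1

for task, total in totals.items():
    print(f"{task}: {errors[task] / total:.0%} error rate")
```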


http://www.technologyreview.com/Infotech/20122/?a=f