Thursday, January 31, 2008

The Rise of Systemic Financial Risk

MIT's Andrew Lo describes how one rogue trader can impact global markets.
Market watcher: Andrew Lo, director of MIT's Laboratory for Financial Engineering, says that the growing complexity of world markets makes it more likely that aberrations like the Societe Generale fraud will rock them.
Credit: MIT


Yesterday, a week after Societe Generale disclosed a $7.2 billion loss by a single rogue trader, Bank of France governor Christian Noyer declared to the French Senate's finance committee, "None of the controls within Societe Generale seem to have worked as they should have."

But beyond the evident failure of internal control technologies lie wider vulnerabilities in the global financial system. It is possible that the deeds of 31-year-old Jerome Kerviel at Societe Generale triggered global stock sell-offs, says Andrew Lo, director of MIT's Laboratory for Financial Engineering. And that points to widening systemic risk in ever more complex financial markets.

Technology Review: First, what do we know about the failures of those Societe Generale controls?

Andrew Lo: They are still trying to piece together the different methods he used, but apparently, it was his intimate knowledge of Societe Generale's systems infrastructure that allowed him to circumvent various controls. From news reports, it appears he was able to access internal financial databases and not only alter the stated holdings of the accounts he was trading but also circumvent the checks and reconciliation processes put in place to make sure those holdings were accurate. Apparently, the standard reconciliation processes did run, but he altered the records both before and after they ran so as to avoid detection and maintain his portfolio.

TR: Can't we just build better software and other technologies to prevent a recurrence?

AL: Yes, but anytime there is an interface between technology and human behavior, you open yourself up to the potential for fraud. Systems don't build themselves: humans program them. A big event like this happens every so often, and then people say, "Gee, we have to spend more time and money to improve our systems," and the systems become safer. Once the systems become safer, we get lulled into a false sense of security and complacency. And eventually, we experience a rude awakening when the next disaster strikes. I would argue that it is impossible to prevent these disasters with 100 percent certainty.

TR: Okay, so bad things will happen. I take it you are mainly concerned about the ripple effect when they do?

AL: Exactly. The financial system as a whole is getting more complex. Financial institutions rely on ever more elaborate systems architecture and electronic communications across different counterparties and sectors. The number of parties involved, the nature of transactions, the volume of transactions as the market grows--taken together, the dynamics among these aspects of financial markets imply that the complexity is growing exponentially. No single human can comprehend that complexity. And as the system grows more complex, it is a well-known phenomenon that the probability of some kind of shock spreading through the system increases as well. Systemic shocks become more likely. Today, we are looking at some significant exposure to relatively rare events.

TR: In what way was the Societe Generale matter such a shock?

AL: One natural hypothesis is that the global sell-off that happened early last week was a direct outcome of Societe Generale's unwinding of these rogue trades. We don't have any conclusive evidence yet, but it's not an outlandish conjecture given the circumstances surrounding the massive fraud that was allegedly committed. According to Societe Generale, the problem was discovered on Saturday [January 19], and the firm began unwinding their portfolio at the first possible opportunity. If it turns out that this "unwind" was on the scale of a billion dollars or more, it is plausible that the unwind itself triggered the global sell-off--first in Asia, then in Europe, and then in the U.S.

TR: So one person, in this case Mr. Kerviel, can move the entire global financial system.

AL: It's a larger-scale version of what happened in August of 2007--in particular, August 7, 8, and 9. A large number of quantitative equity hedge funds lost money on those dates simultaneously, yet there is no market event you can point to that explains why they all lost money at the same time. But looking at circumstantial evidence, we [at MIT] pieced together a story: one large quantitative equity fund decided to unwind its portfolio, for reasons we don't know for sure but conjecture to be related to credit problems in the subprime-mortgage market. Because the conjectured liquidation involved a big fund that had to be unwound quickly, its impact on other, similarly positioned quantitative equity funds was bound to be negative--and large. You get a snowball effect. Everybody is heading for the exit door at the same time, and you get a crash. But in August 2007, it was not a crash of the market as a whole, but of portfolios structured similarly to the fund that started the snowball.
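
To make the mechanism concrete, here is a toy sketch (in Python; this is an illustration, not Lo's model--the fund sizes, loss limits, and price-impact figure are all invented) of how one large unwind can breach other funds' loss limits and force further selling:

```python
# Toy parameters: four surviving funds with hypothetical loss limits, plus
# one large fund whose forced unwinding starts the cascade.
funds = [{"assets": 1.0, "loss_limit": limit} for limit in (0.05, 0.07, 0.09, 0.15)]
price = 1.0
PRICE_IMPACT = 0.02  # assumed price drop per unit of assets dumped

selling = [{"assets": 3.0, "loss_limit": 0.0}]  # the fund that unwinds first
while selling:
    fund = selling.pop()
    price -= PRICE_IMPACT * fund["assets"]      # forced selling depresses prices
    for other in list(funds):
        if 1.0 - price > other["loss_limit"]:   # loss limit breached...
            funds.remove(other)
            selling.append(other)               # ...so this fund must unwind too
    print(f"price {price:.2f}, funds still holding: {len(funds)}")
```

Run it and the first unwind knocks prices down just enough to force the most loss-averse fund to sell, whose selling forces the next, and so on--the exit-door dynamic Lo describes, in miniature.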

TR: So how can we mitigate these kinds of wider risks?

AL: Probably the best way to reduce the impact of systemic shocks is to provide investors with some transparency as to their likelihood and severity, and let the investors decide how much risk to bear. This is probably best accomplished by creating a government organization like the National Transportation Safety Board, charged with the mandate of analyzing every financial blowup or crisis and producing publicly available reports that describe the nature of the crisis, the circumstances leading up to it, and proposed methods for avoiding such incidents in the future. In the same way that the NTSB has improved the safety of air travel by sifting through the wreckage of every airplane crash and publishing a detailed study of its findings and recommendations, a Capital Markets Safety Board would give investors more insight into the risks of any given investment. Over time, the aggregate information produced by the CMSB would shed additional light on the nature of systemic risks for the entire global financial system.


http://www.technologyreview.com/Biztech/20133/

Cheap Hydrogen

A new process uses sunlight and a nanostructured catalyst to inexpensively and efficiently generate hydrogen for fuel.

Solar gases: A parabolic trough can focus sunlight on nanostructured titania, improving the efficiency of a new system for generating hydrogen by splitting water.
Credit: John Guerra, Nanoptek


Nanoptek, a startup based in Maynard, MA, has developed a new way to make hydrogen from water using solar energy. The company says that its process is cheap enough to compete with the cheapest approaches used now, which strip hydrogen from natural gas, and it has the further advantage of releasing no carbon dioxide.

Nanoptek, which has been developing the new technology in part with grants from NASA and the Department of Energy (DOE), recently completed its first venture-capital round, raising $4.7 million that it will use to install its first pilot plant. The technology uses titania, a cheap and abundant material, to capture energy from sunlight. The absorbed energy releases electrons, which split water to make hydrogen. Other researchers have used titania to split water in the past, but Nanoptek researchers found a way to modify titania to absorb more sunlight, which makes the process much cheaper and more efficient, says John Guerra, the company's founder and CEO.

Researchers have known since the 1970s that titania can catalyze reactions that split water. But while titania is a good material because it's cheap and doesn't degrade in water, it only absorbs ultraviolet light, which represents a small fraction of the energy in sunlight. Other researchers have tried to increase the amount of sunlight absorbed by pairing titania with dyes or dopants, but dyes aren't nearly as durable as titania, and dopants haven't produced efficient systems, says John Turner, who develops hydrogen generation technologies at the National Renewable Energy Laboratory (NREL), in Golden, CO.

Nanoptek's approach uses insights from the semiconductor industry to make titania absorb more sunlight. Guerra says that chip makers have long known that straining a material so that its atoms are slightly pressed together or pulled apart alters the material's electronic properties. He found that depositing a coating of titania on dome-like nanostructures caused the atoms to be pulled apart. "When you pull the atoms apart, less energy is required to knock the electrons out of orbit," he says. "That means you can use light with lower energy--which means visible light" rather than just ultraviolet light.
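
A back-of-envelope calculation shows why this matters. A semiconductor can only absorb photons at wavelengths shorter than the cutoff set by its bandgap, lambda = 1240 / E_g nanometers (with E_g in electron-volts). The sketch below uses titania's usual bandgap of about 3.2 eV; the lower "strained" value is purely illustrative, not a figure from Nanoptek:

```python
# A semiconductor absorbs only light whose wavelength is below the cutoff
# lambda = 1240 / E_g (nanometers, with the bandgap E_g in electron-volts).
def cutoff_wavelength_nm(bandgap_ev: float) -> float:
    return 1240.0 / bandgap_ev

print(cutoff_wavelength_nm(3.2))  # unstrained titania: ~388 nm, ultraviolet only
# If strain lowered the effective gap to, say, 2.4 eV (an illustrative value),
# the cutoff would move to ~517 nm, well into the visible range:
print(cutoff_wavelength_nm(2.4))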

The strain on the atoms also affects the way that electrons move through the material. Too much strain, and the electrons tend to be reabsorbed by the material before they split water. Guerra says that the company has had to find a balance between absorbing more sunlight and allowing the electrons to move freely out of the material. Nanoptek has also developed cheaper ways to manufacture the nanostructured materials. Initially, the company used DVD manufacturing processes, but it has since moved on to a still-cheaper proprietary process.

NREL's John Turner says that Nanoptek's process is "very, very promising." And Harriet Kung, the acting director of the DOE's office of basic energy sciences, which has funded Nanoptek's work, says that the strained-titania approach is "one of the major exciting advances" since titania was first discovered to be a photocatalyst in the 1970s.

If it works as expected, the technology could help address one of the fundamental problems with using hydrogen as fuel. Hydrogen is attractive because it is light and burning it produces only water. But today most hydrogen is made from natural gas, a process that releases considerable amounts of carbon dioxide. The other main option is electrolysis. But even if it's powered by clean energy, such as electricity from photovoltaics, electrolysis is inefficient and expensive. Guerra says that strained titania, combined with Nanoptek's inexpensive manufacturing process, makes hydrogen production cheap and efficient enough to compete with processes that create hydrogen from natural gas. What's more, Guerra says, the Nanoptek technology can be located closer to customers than large-scale natural-gas processing, which could significantly reduce transportation costs and help make the technology attractive. And if carbon emissions are taxed or regulated in the future, Nanoptek's carbon-free approach will be a further advantage.

Turner says that in addition to making hydrogen for fuel-cell vehicles, Nanoptek's process--if it is indeed efficient and inexpensive, as the company claims--could also be important for large-scale solar electricity. If solar is ever to be a dominant source of power, finding ways of storing the energy for night use will be essential. And hydrogen, he says, could be a good way to store it.


http://www.technologyreview.com/Energy/20134/

Programming Advanced Materials

Ordered nano order: Sequences of DNA attached to gold nanoparticles (upper image) program the particles’ self-assembly into novel crystals (lower image). X-ray diffraction confirms the crystals--partly squashed by the electron microscopy that produced these images--to be perfect lattices of tens of thousands of particles.
Credit: Oleg Gang


In 1996, scientists at IBM and Northwestern University used single-stranded DNA as if it were molecular Velcro to program the self-assembly of nanoparticles into simple structures. The work helped launch the then-nascent nanotechnology field by suggesting the possibility of building novel materials from the bottom up. Twelve years later, researchers from Northwestern and Brookhaven National Laboratory report separately in the journal Nature that they have finally delivered on that promise, using DNA linkers to transform nanoparticles into perfect crystals containing up to one million particles.

"The crystal structures are deliberately designed," says Northwestern's Chad Mirkin, one of the materials scientists who pioneered DNA linking in the 1990s and a coauthor of one of today's reports. "This is a new way of making things."

Ohio State University physicist David Stroud calls the work "quite valuable." He predicts that the breakthrough will enable the assembly of new materials with novel optical, electronic, or magnetic properties that have, until now, existed only in the minds and models of materials scientists. "Even now I'm surprised they could do it," says Stroud.

To date, efforts at programmed nanoparticle self-assembly in three dimensions have produced mostly disordered clumps. These clumps can have value; indeed, Mirkin's startup company NanoSphere has used the technology to develop medical diagnostics that have gained approval from the Food and Drug Administration.

But more complex and exotic materials imagined by Stroud and others require ordered structures. The hang-up, says Stroud, is that nanoparticles are immense relative to the atoms that form most crystals. As a result, the nanoparticles move relatively slowly, especially with DNA strands attached. When cooled to allow the complementary strands of DNA to link up, the nanoparticles tend to get frozen into a disordered arrangement before they can find their way to the orderly lattice of a crystal.

The authors of the new reports--a team at Northwestern led by Mirkin and chemist George Schatz, and physicist Oleg Gang's team at Brookhaven National Laboratory's functional materials center, in Upton, NY--overcame the particles' sluggishness by using longer DNA strands that give the particles more flexibility during crystal formation. "Typically, we think that crystallinity requires very rigid structures, so one could imagine it's necessary to have a very rigid DNA shell on the particles to have good crystals," says Gang. "In reality, it's the opposite."

While the details of the Northwestern and Brookhaven systems differ, both pad out their DNA strands with sequences that act as spacers and flexors, in addition to the complementary sequences at the DNA ends that bind particles together. Each group starts by attaching one of two mutually complementary types of DNA to gold nanoparticles. The two pools of modified particles are then mixed and cooled. Complementary DNA strands form a double helix, tying together their respective nanoparticles, while identical strands act like springs to repel their particles. The spacers on each strand, meanwhile, allow bound particles to twist and bend so that each particle in the mix can bind the largest possible number of complementary particles.

The result is exactly what theory predicts: a crystal lattice in which each particle of one type is surrounded by eight of the others marking the corners of a cube. Mirkin's group further demonstrated that tweaking the temperature and DNA sequences could nudge the same mix of particles to form a distinct crystal structure in which each particle has 12 neighbors.
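
The eight-neighbor arrangement is the cesium-chloride-type lattice, and its geometry is easy to verify numerically. The following sketch (plain Python with NumPy, written for illustration rather than taken from either group) places one particle type on a cubic grid and the other at the cube centers, then counts nearest unlike neighbors:

```python
import numpy as np

# Type-A particles on a cubic grid, type-B particles at the cube centers
# (a cesium-chloride-type lattice).
n = 4  # cells per side; small, since we only inspect interior particles
a_sites = np.array([(x, y, z) for x in range(n)
                    for y in range(n) for z in range(n)], dtype=float)
b_sites = a_sites + 0.5  # body centers of the cubes

# In a unit cube, the nearest unlike neighbor sits sqrt(3)/2 away
# (corner to center). Count A particles at that distance from each B.
dist = np.linalg.norm(b_sites[:, None, :] - a_sites[None, :, :], axis=2)
neighbor_counts = np.isclose(dist, np.sqrt(3) / 2).sum(axis=1)

# Away from the finite lattice's edges, every particle has exactly eight
# unlike neighbors marking the corners of a cube, as described above.
interior = np.all(b_sites < n - 1, axis=1)
print(sorted(set(neighbor_counts[interior].tolist())))  # -> [8]
```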

Mirkin says that he and his team are just getting started. "To me, it's really only the start rather than the ending," he says. Over the past three years, Mirkin's group has been demonstrating methods to place different DNA linkers on different faces of nonspherical particles, such as triangle-faced prisms and virus particles. That, he says, should enable programming of more complex materials with repeating patterns of three or more components. "The really intriguing possibility here is the ability to program the formation of any structure you want," says Mirkin.

Stroud says that the structures already produced will be useful as the DNA-programmed assembly is extended to particles other than gold. Applications could include photonic crystals, in which the precise periodicity of particles can tune the overall materials to manipulate specific wavelengths of light, and photovoltaics that capture a broader range of the solar spectrum.

The structures are highly porous--10 percent particles and DNA, 90 percent water. That could hinder applications in which water is undesirable: drain out the water, and the crystals collapse. Gang says that one could stabilize the crystals by filling the lattice with a polymer, but he is also exploring alternative stabilization schemes that would preserve the lattice's open space.


http://www.technologyreview.com/Nanotech/20137/

Wednesday, January 30, 2008

Philips feminizes the LCD TV

With its 'Design Collection' of LCD TVs, the Dutch electronics giant is cottoning on to the fact that style sells. It is focusing some of its design efforts around market research suggesting that, for 98% of female consumers, style is an important factor in the buying process.

Andrea Ragnetti, Philips' consumer lifestyle guru, is at the forefront of a growing trend among large consumer-focused organisations that have recognized the increasing importance of female buying power.

Three new LCD TV series--the 7000, 5000 and 3000--certainly lean towards a more 'feminine' style without being overtly directed towards the fairer sex. The new panels, however, have substance to back up the style.

The top-of-the-range 1080p 7000 series comes equipped with 120Hz processing, an extremely rapid 2-millisecond response time, and an impressive four HDMI (v1.3) inputs.

The new panels will feature Philips' proprietary HD Digital Natural Motion technology (HD DNM) which has been designed to reduce on-screen juddering, while Motion Estimation Motion Compensation (MEMC) acts to further smooth motion by inserting compensating frames within faster scenes.

With a growing reputation for producing some of the most technologically advanced LCD TVs around, Philips, like other manufacturers, has begun to realize the importance of psychological aspects of buying and how they affect the bottom line.

http://hdtvorg.co.uk/news/articles/2008013002.htm

The demise of HD DVD?

Without wanting to sound too premature about the possible demise of the HD DVD high-definition format, more bad news suggests that its rival, Blu-ray, may well emerge as the dominant format.

This week, the market research group Gartner indicated that it believed that the format war would be resolved in 2008 with Blu-ray emerging as the victor. It also suggested that Toshiba's strategy of cutting HD DVD player prices would simply delay the inevitable.

HD DVD was dealt what many believe to be a fatal blow earlier this month with the announcement from Warner Brothers that it would no longer be producing HD DVD versions of its films. Warner accounts for around 20% of all DVD sales, so the big Hollywood studio's decision was certainly a huge setback for the HD DVD format.

More bad news for HD DVD emerged this week in the UK, with Woolworths announcing that it will no longer offer HD DVD films in its stores after March this year. The retailer's decision is based on Blu-ray outselling its rival 10:1 over the Christmas period, with a company spokesperson pointing to the fact that the 750,000 (Blu-ray playing) PS3s in the UK give the format a significantly larger user base than HD DVD.

If a single format emerges, it will be good news for consumers, who up until this point have been reluctant to commit to high-definition DVD, and for the industry in general, which will surely see an explosion in public interest. Barry Meyer, chairman of Warner Bros, had previously warned that "the window of opportunity for high-definition DVD could be missed if format confusion continues to linger."

http://hdtvorg.co.uk/news/articles/2008013001.htm

Ultra Slim LCD TV's from Sharp

At a mere 3.44 cm thick, three new Aquos LCD TVs from Sharp are destined to become the slimmest screens commercially available, for a while at least.

The 37in, 42in and 46in screens represent a trend in demand for larger, slimmer panels that owe as much to consumer demand for high style as to the latest technological wizardry. This is not to say that the new panels from Sharp are not very well specified.

The AQUOS X Series features Full HD (1920 x 1080) resolution panels, 120Hz processing, a thin-profile three-way, eight-speaker system and a 1-bit digital amplifier, along with three HDMI (v1.3) inputs.

Available to Japanese consumers this March, the X Series looks likely to reach the UK some time this year. No dates or prices are available as yet.

http://hdtvorg.co.uk/news/articles/2008012901.htm

Detecting Asthma Irritants

A portable sensor array measures air quality to discern the causes of asthma attacks.

Detecting irritants: Mark Jones (above) is the lead engineer for the development of a sensor system (below) that measures five types of chemicals known to cause asthma attacks. The device, approximately the size of a large cell phone, will be used to continuously monitor a person’s exposure levels to find the cause of attacks.
Credit: Courtesy of Kitty Ray Swain


Researchers at Georgia Tech Research Institute (GTRI) in Atlanta have developed a portable sensor system to monitor the air quality for people suffering from asthma. The device is a combination of sensors that measure the level of chemicals in the air thought to cause asthma attacks, such as ozone, volatile organic compounds, and formaldehyde. It is lightweight and small enough to fit into a patient's pocket, so exposure levels can be continuously monitored.

"The only way that we are going to understand how environmental factors affect asthma is if we can measure a person's exposures on a day-to-day basis," says Charlene Bayer, the leader of the Environmental Exposures and Analysis Group at GTRI and the sensor system's principal investigator. "To do so, we need a device like this that can hold numerous sensors in a small, portable package."

An estimated 20 million Americans suffer from asthma, according to the National Institutes of Health (NIH), and identifying the triggers of an attack is currently a guessing game. "There are a few devices on the market that measure one or two chemicals, but they are stationary and the size of a desktop computer," says Mark Jones, the chief executive officer of Keehi Technologies and the lead engineer developing the sensor system.

Currently, the only way to control an asthma attack is with medication, or "trigger avoidance." In 2007, the total health-care costs of asthma in the United States were approximately $19.7 billion, according to the NIH.

"Research has shown that if you can reduce the triggering of an asthma attack, you will reduce the impact of the disease," says Mark Millard, the director of the Baylor Martha Foster Lung Care Center at Baylor University Medical Center in Dallas, TX. The new sensor system, he says, is really trying to answer the question, "What are the triggers for people with asthma?"

The device is about the size of a cell phone and contains a total of five sensors that measure different possible asthma triggers: ozone, nitrogen dioxide, formaldehyde, carbon dioxide, and total volatile organic compounds--the brew of chemicals that are emitted as gases from products such as paints, cleaning supplies, and building materials. The device also includes temperature and humidity sensors and a clock, to put a time stamp on the measurements. The researchers used sensors already on the market and kept the device small by outfitting the sensors on a two-sided circuit board.

Establishing a timeline is important for late-phase reactions, says Millard, since reactions to compounds such as formaldehyde may happen four to six hours after a patient is exposed. "Now we can look at the data and know that a patient was exposed to a lot of those compounds and that could be the trigger."

To measure the air quality, a small motor in the device sucks in air through an intake hose. Before the air passes over the sensors, it encounters a small filter that removes particulates, such as dust and pollen. The mass of the filter is measured before and after a sampling period to determine the total amount of particles. The air is then evenly distributed over the sensors.

"It takes about 30 seconds for the air to pass through the device and the data to be stored, and then it goes to sleep for another minute. In one hour it takes approximately 50 or 60 samples," says Jones.

The device can be worn for up to 24 hours before the particle filter needs to be replaced and the memory on the device is full. The data can be downloaded from the sensor system onto a computer.
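
The sampling routine Jones describes is essentially a timed duty cycle: draw air, read each sensor, time-stamp and store the readings, sleep, repeat until the memory fills. A minimal sketch of that loop (illustrative Python, not GTRI's firmware; the sensor list and timings come from the description above, and read_sensor is a stand-in):

```python
import time
from datetime import datetime

# Channels and timing follow the article's description; read_sensor() is a
# placeholder for the real analog-to-digital hardware read.
SENSORS = ["ozone", "nitrogen_dioxide", "formaldehyde",
           "carbon_dioxide", "total_voc", "temperature", "humidity"]
SAMPLE_SECONDS = 30  # air is drawn over the sensors and the data stored
SLEEP_SECONDS = 60   # the device then idles before the next sample

def read_sensor(name: str) -> float:
    return 0.0  # stand-in reading

log = []  # stands in for on-board memory, downloaded to a computer later

def run(cycles: int, cycle_seconds: float = SAMPLE_SECONDS + SLEEP_SECONDS):
    for _ in range(cycles):
        reading = {"timestamp": datetime.now().isoformat()}
        for name in SENSORS:
            reading[name] = read_sensor(name)
        log.append(reading)  # every reading carries its time stamp
        time.sleep(cycle_seconds)

run(cycles=3, cycle_seconds=0.01)  # shortened cycle for a quick test; ~90 s on the device
print(len(log), "time-stamped samples collected")
```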

Millard says the device is unique and innovative, but that he would like to see its capabilities expanded to measure tobacco smoke. He would also like to be able to separate out the particle measurements so they can be measured in real time--an upgrade that Bayer says will be introduced once the device is commercialized. Bayer would also like to get more specific readings on the different volatile organic compounds.

"We would like to get to the point where we can pop certain sensors in and out so a patient can target it towards their particular needs," says Bayer. "Asthma is a very complicated disease and there are a number of different airborne exposures that can exacerbate an asthma attack. This technology will allow us to find the source of exacerbation and understand the health impacts," she says.

The researchers at GTRI are currently in talks with an undisclosed company to commercialize the device, says Bayer. The initial target users will be asthma patients, but the device will be open for use by others who want to study environmental exposures.


http://www.technologyreview.com/Biotech/20131/

Genetic Variant Predicts Heart Disease Risk

A newly identified risk factor for heart disease also seems to indicate which patients will benefit from popular statin therapies.

Heartsick: There have been many false leads in identifying risk genes for heart disease, so the burden of proof for those studies should be much higher than usually required, some experts say.
Credit: Technology Review


Testing for a genetic variation could predict the likelihood that a patient will respond well to certain statins. But some researchers say it's too soon to use the variation to determine treatment.

Researchers from Celera reported yesterday in the Journal of the American College of Cardiology that a single substitution in the sequence of a gene called KIF6 makes people both more susceptible to heart attacks and more responsive to certain drugs that lower cholesterol. Though there is no known biological explanation linking the variation to heart disease, the study found that it increases the risk of heart attacks and strokes by 55 percent.

Celera, the company best known for sequencing the human genome, examined 35 single-nucleotide polymorphisms (SNPs) in 30,000 patients. Of those, "KIF6 is by far the most significant," says Thomas J. White, chief scientific officer at Celera. In fact, nearly 60 percent of the study population was found to carry the KIF6 variant. (According to the study, these findings take into account other factors, such as smoking, high blood pressure, and cholesterol levels.)

The researchers also found that carriers of the KIF6 variant responded better to the cholesterol-lowering drugs pravastatin (Pravachol) and atorvastatin (Lipitor). For example, among patients with the genetic variation, those who took pravastatin were 37 percent less likely to experience a heart attack than those who took the placebo. Those without the genetic variation who took the drug were only 14 percent less likely to experience a heart attack than those who took the placebo. Statins are big sellers for the pharmaceutical industry. In 2006, Lipitor, the world's best-selling drug, brought in $13 billion in global sales.

"This is one of the first studies to show an interaction with therapy" and genotype, says Marc Sabatine, professor of medicine at Harvard Medical School and a coauthor on one of the papers. "That is very exciting to see."

Surprisingly, the researchers found that KIF6 doesn't appear to work by lowering levels of LDL or "bad" cholesterol, the standard by which drugs used to prevent heart attacks are normally measured. White says that KIF6 may instead act by stabilizing "vulnerable plaques," which are particularly prone to triggering heart attacks.

Celera is developing a diagnostic that would test for the KIF6 variant and expects to launch it in a few months.

But some experts caution that it may be premature to introduce such diagnostic tests before there is further confirmation of KIF6's role in heart disease.

"Even if there are beneficial results, the standard should be that you need to document that knowing the genetic information is clinically useful," says Sekar Kathiresan, director of preventive cardiology at Massachusetts General Hospital.

Coronary heart disease caused one of every five deaths in the United States in 2006, so scientists have for quite some time been on the hunt for genes linked to heart attacks.

Rapid advances in technology have made that task much easier. At the same time, many of the genetic links to heart disease identified so far haven't held up on further analysis. At present, the only credible link is to a variant in the 9p21 region, identified last year by the Icelandic company deCODE Genetics, says Kathiresan. DeCODE offers a $200 diagnostic test for the 9p21 variant. (See "Gene Variant Linked to Heart Disease.")

A second gene, PCSK9, also looks promising, Kathiresan adds. "Nearly everything else is in the realm of 'possible but not definite.'"

It's good that KIF6 has been identified as a potential risk factor in several different studies, Kathiresan says. In each of the studies, he notes, there is less than a one-in-20 probability that the finding is a result of chance, which is generally considered an acceptable threshold for statistical significance.

But because of the high possibility of false positives, the threshold for genome-wide association studies should be much higher, on the order of one in 20 million, Kathiresan says. Both the 9p21 and PCSK9 variants pass that test, he says.
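
The reasoning behind the stricter threshold can be made concrete with a few lines of arithmetic (a simple illustration assuming independent tests, not a method from the Celera study):

```python
# Repeated testing at p < 0.05 makes false positives near-certain; this
# simple calculation assumes independent tests.
p = 0.05
for n_tests in (1, 35, 1_000_000):  # 35 echoes the SNPs Celera examined
    chance = 1 - (1 - p) ** n_tests
    print(f"{n_tests:>9} tests -> P(at least one false positive) = {chance:.3f}")

# Bonferroni-style reasoning: to keep the overall false-positive rate near
# 0.05 across about a million tested variants, each individual test must
# clear p < 0.05 / 1e6 = 5e-8 -- the one-in-20-million order of magnitude
# Kathiresan cites for genome-wide studies.
print(0.05 / 1_000_000)  # 5e-08
```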

"The key issue here is we don't know if these [KIF6 studies] are real results," Kathiresan says. "You need to show that it is clinically useful, and they have not crossed that threshold."


http://www.technologyreview.com/Biotech/20130/

Smart Badges Track Human Behavior

MIT researchers used conference badges to collect data on people's interactions and visualize the social network.
Social sense: MIT researchers tracked people’s social interactions at a conference using a smart badge (top) that incorporated an infrared sensor, wireless radio, accelerometer, and microphone to log people’s behaviors. The result was a social network (bottom), produced in real time, which showed who had spoken to whom during the course of the event.
Credit: Ben Waber


In the corporate and academic worlds, conferences and networking events are necessary. But while some people trade business cards with aplomb, others clump with coworkers, rarely venturing beyond the safety of their pre-existing social circle. New research from MIT's Media Lab has shown that a sensor-laden conference badge might be able to help people venture out, form new connections, and gain insight into how they interact with others at such events.

Ben Waber, an MIT researcher who worked on the project, gave souped-up badges to 70 participants at a Media Lab event. These badges use an infrared sensor to gather data about face-to-face interactions, a wireless radio to collect data on proximity to other badges and send it to a central computer, an accelerometer to track the participant's motion, and a microphone to monitor speech patterns. At the event, the data from the infrared sensors was wirelessly transmitted to a computer that crunched the numbers, producing a real-time visualization of the event's social graph.
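
Conceptually, turning those detections into a social graph is straightforward. The sketch below (illustrative Python, not the Media Lab's actual code; the badge IDs and log entries are hypothetical) builds an adjacency structure from face-to-face detections:

```python
from collections import defaultdict

# Each log entry says one badge's infrared sensor saw another badge at a
# given time, which we treat as evidence of a face-to-face interaction.
detections = [
    ("badge_01", "badge_07", "14:02"),  # hypothetical entries
    ("badge_07", "badge_01", "14:02"),
    ("badge_01", "badge_12", "14:30"),
]

graph = defaultdict(set)  # adjacency sets: who has spoken with whom
for seen_by, seen, _time in detections:
    graph[seen_by].add(seen)
    graph[seen].add(seen_by)  # face-to-face contact is symmetric

# The resulting edge list is what a real-time visualization would draw.
for badge in sorted(graph):
    print(badge, "->", sorted(graph[badge]))
```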

This project illustrates the increasing popularity of sociometrics, a discipline in which sensors collect fine-grained data during social interactions and software makes sense of it. Waber works with MIT professor Sandy Pentland, who is credited with much of the early work in sociometrics and coining the term "reality mining." (See "What Your Cell Phone Knows About You" and "The iPhone's Untapped Potential.") But Waber and Pentland aren't alone. Researchers at Intel are using sensors to help monitor the health and behavior of the elderly. And others are using position data gleaned from cell phones to help develop more-comprehensive models of how disease spreads.

In addition, an MIT spin-off company, nTag, provides smart badges similar to Waber's that automatically send out and receive "e-cards." While nTag's badges don't collect motion and voice data, they are capable of displaying data as real-time visualizations of the social network at a conference, says Rick Borovoy, cofounder and chief technology officer of the company.

Borovoy says that revealing a social network, in particular, can change the dynamics at a conference. "It creates a sense of community and identity, and it's a way to subtly intervene and disrupt conventional networking patterns," he says. Borovoy says that nTag has found that showing people their networking patterns on a social graph is enough to change them. "You think people know their patterns, but often they don't," he says.

Waber says that the smart badges used in his experiment, which are about the size of a deck of cards but weigh less, can do more than just show face-to-face interactions and display a real-time social graph, and he has plans to look at the rest of the data to see what patterns emerge. For example, since the wireless radio can sense proximity and voice data, it's possible to infer when a person is engaged in a group discussion and who the expert is.

Accelerometer data could also indicate activity at the conference. Waber says that if a conference organizer is running around, it could be a sign that he needs help getting things done, suggesting that the organizer should plan for more help at certain times during an event.

Some experts suspect that, within the next few years, smart badges won't be confined to conferences and events. "We think that eventually everyone will have a smart badge with them all the time: their cell phone," says Alex Kass, a researcher who leads reality-mining projects at Accenture, a technology firm. "Cell phones will transmit some kind of identity or interesting information to the people around you."


http://www.technologyreview.com/Infotech/20129/?a=f

Tuesday, January 29, 2008

Graphene Transistors

Predicted electronic properties that have made researchers excited about a new material have now been demonstrated experimentally.

Speedy carbon: Thin ribbons of graphene (left) could be useful for future generations of ultra-high-speed processors (scale bar is 100 nanometers). Graphene is made of carbon atoms arranged in hexagons (right).
Credit: Hongjie Dai

A researcher at Stanford University has provided strong experimental evidence that ribbons of carbon atoms can be used for future generations of ultrafast processors.

Hongjie Dai, a professor of chemistry at Stanford, and his colleagues have demonstrated a new chemical process that produces extremely thin ribbons of a carbon-based material called graphene. They have also shown that these ribbons, once incorporated into transistors, exhibit excellent electronic properties. Such properties have been predicted theoretically, Dai says, but not demonstrated in practice, and they make graphene ribbons attractive for use in logic transistors in processors.

The discovery could lead to even greater interest in the experimental material, which has already attracted the attention of researchers at IBM, HP, and Intel. Graphene, which consists of carbon atoms arranged in a one-atom-thick sheet, is a component of graphite. Its structure is related to carbon nanotubes, another carbon-based material that's being studied for use in future generations of electronics. Both graphene and carbon nanotubes can transport electrons extremely quickly, which could allow very fast switching speeds in electronics. Graphene-based transistors, for example, could run at speeds a hundred to a thousand times faster than today's silicon transistors.

But graphene sheets have one significant disadvantage compared with the silicon used in today's chips. Although graphene can be switched between different states of electrical conductivity--the basic characteristic of semiconductor transistors--the difference between these states, called the on/off ratio, isn't very high. That means that unlike silicon, which can be switched off, graphene continues to conduct a lot of electrons even in its "off" state. A chip made of billions of such transistors would waste an enormous amount of energy and therefore be impractical.

Researchers had theorized, however, that it might be possible to dramatically improve these on/off ratios by carving graphene sheets into very narrow ribbons just a few nanometers wide. There had been early evidence supporting these theories from researchers at IBM and Columbia University, but the ratios produced were still much lower than those in silicon.
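
The theory's key prediction is that confinement opens a bandgap that grows as the ribbon narrows, scaling roughly with the inverse of the width. The sketch below makes that scaling concrete; the ~0.8 eV·nm prefactor is an empirical estimate reported for chemically derived ribbons, so treat the exact numbers as rough:

```python
# The gap opened by confinement scales roughly as alpha / width. The
# prefactor (~0.8 eV*nm) is an empirical estimate for chemically derived
# ribbons; treat the exact value as an assumption.
ALPHA_EV_NM = 0.8

def approx_bandgap_ev(width_nm: float) -> float:
    return ALPHA_EV_NM / width_nm

for width in (5, 10, 20, 50):
    print(f"{width:>3} nm ribbon -> gap of ~{approx_bandgap_ev(width):.2f} eV")
# A wide sheet has essentially no gap, so it never really switches off;
# a sub-10-nm ribbon opens a gap closer to a conventional semiconductor's,
# which is what allows high on/off ratios.
```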

Dai decided to take a different approach to making thin graphene ribbons. Whereas others had used lithographic techniques to carve away carbon atoms, Dai turned to a solution-based approach. He starts with graphite flakes, which are made of stacked sheets of graphene. Then he chemically inserts sulfuric acid and nitric acid molecules between these flakes and rapidly heats them up, vaporizing the acids and forcing the graphene sheets apart. "It's like an explosion," Dai says. "The sheets go separate ways, and the graphite expands by 200 times."

Next, he suspends the now-separated sheets of graphene in a solution and exposes them to ultrasonic waves. These waves break the sheets into smaller pieces. Surprisingly, Dai says, the sheets fracture not into tiny flakes but into thin and very long ribbons. These ribbons vary in size and shape, but their edges are smooth--which is key to having consistent electronic properties. The thinnest of the ribbons are less than 10 nanometers wide and several micrometers long. "I had no idea that these things could be made with such dimensions and smoothness," Dai says.

When Dai made transistors out of these ribbons, he measured on/off ratios of more than 100,000 to 1, which is attractive for transistors in processors. Previously, room-temperature on/off ratios of graphene ribbons had been measured at about 30 to 1.

Still, many obstacles remain to making graphene processors using Dai's methods, says Walter de Heer, a physics professor at Georgia Tech. The ribbons made with Dai's process have to be sorted. Pieces that are too large or not in the shape of ribbons have to be weeded out. There also needs to be a way of arranging the ribbons into complex circuits.

However, researchers already have ideas about how to address these challenges. For example, graphene ribbons have more exposed bonds at their edges, so chemicals could be attached to these bonds that would direct the ribbons to bind to specific places to form complex circuits, de Heer says.

The best way to make graphene electronics, however, may be to take advantage of the fact that graphene can be grown in large sheets, says Peter Eklund, a professor of physics at Penn State. If better lithography methods are developed to pattern these sheets into narrow ribbons and circuits, this could provide a reliable way of making complex graphene-based electronics.

Ultimately, the most important aspect of Dai's work could be the fact that it has demonstrated electronic properties that were only theoretical before, Eklund says. And this could lead to even more interest in developing graphene for next-generation computers. "Once you get a whiff of narrow graphene ribbons with a high on/off ratio, this will tempt a lot of people to try to get in there and either make ribbons by high-technology lithographic processes, or try to improve the approach developed by Dai," says Eklund.


http://www.technologyreview.com/Nanotech/20119/

Looking into the Brain with Light

An Israeli company is working on a new device to monitor oxygen levels in the brain.

Mind reader: The OrNim targeted oximetry probe (above) adheres to the scalp to monitor the oxygen levels of specific areas of the brain.
Credit: OrNim

A new noninvasive diagnostic technology could give doctors the single most important sign of brain health: oxygen saturation. Made by an Israeli company called OrNim and slated for trials on patients in U.S. hospitals later this year, the technology, called targeted oximetry, could do what standard pulse oximeters can't.

A standard pulse oximeter is clipped onto a finger or an earlobe to measure oxygen levels under the skin. It works by transmitting a beam of light through blood vessels in order to measure the absorption of light by oxygenated and deoxygenated hemoglobin. The information allows physicians to know immediately if oxygen levels in the patient's blood are rising or falling.

Prior to the development of pulse oximeters, the only way to measure oxygen saturation was to take a blood sample from an artery and analyze it in a lab. By providing an immediate, noninvasive measure of oxygenation, pulse oximeters revolutionized anesthesia and other medical procedures.

While pulse oximeters have become accurate and reliable, they have a key limitation: they can't measure oxygen saturation in specific areas deep inside the body. Because pulse oximeters measure only the blood's overall oxygen levels, they have no way of monitoring oxygen saturation in a specific region. This is especially problematic in the case of brain injuries, since the brain's oxygenation can then differ from the rest of the body's.

Information on oxygenation in specific regions of the brain would be valuable to neurologists monitoring a brain-injured patient, as it could be used to search for localized hematomas and give immediate notice of hemorrhagic strokes. When a stroke occurs, an area of the brain is deprived of blood and thus oxygen, but there is no immediate way to detect the attack's occurrence.

CT and MRI scans give a snapshot of tissue damage, but they can't be used for continuous monitoring. It can also be very difficult to conduct such scans on unconscious patients hooked up to life-support devices.

Wade Smith, a neurologist at the University of California, San Francisco, and an advisor to OrNim, points out that, while cardiologists have devices to monitor hearts in detail, neurologists have no equivalent tool. With brain-injured patients, Smith says, "the state of the art is flying blind."

OrNim's new device uses a technique called ultrasonic light tagging to isolate and monitor an area of tissue the size of a sugar cube located between 1 and 2.5 centimeters under the skin. The probe, which rests on the scalp, contains three laser light sources of different wavelengths, a light detector, and an ultrasonic emitter.


The laser light diffuses through the skull and illuminates the tissue underneath it. The ultrasonic emitter sends highly directional pulses into the tissue. The pulses change the optical properties of the tissue in such a way that they modulate the laser light traveling through the tissue. In effect, the ultrasonic pulses "tag" a specific portion of tissue to be observed by the detector. Since the speed of the ultrasonic pulses is known, a specific depth can be selected for monitoring.
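
The depth selection comes down to simple timing arithmetic. Sound travels through soft tissue at a roughly known speed (about 1,540 meters per second, a standard textbook value rather than an OrNim figure), so the delay between firing a pulse and reading the tagged light fixes the depth being monitored:

```python
# Sketch of the depth selection described above: the delay between emitting
# an ultrasonic pulse and reading the "tagged" light picks out tissue at a
# chosen depth, because the pulse's speed through tissue is known.
SPEED_OF_SOUND_TISSUE = 1540.0  # m/s, a standard soft-tissue value

def gate_delay_seconds(depth_cm: float) -> float:
    """Time for the ultrasonic pulse to reach tissue at the given depth."""
    return (depth_cm / 100.0) / SPEED_OF_SOUND_TISSUE

for depth in (1.0, 1.75, 2.5):  # the device's stated working range
    print(f"{depth} cm -> {gate_delay_seconds(depth) * 1e6:.1f} microseconds")
```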

The modulated laser light is picked up by the detector and used to calculate the tissue's color. Since color is directly related to blood oxygen saturation (for example, arterial blood is bright red, while venous blood is dark red), it can be used to deduce the tissue's oxygen saturation. The measurement is absolute rather than relative, because color is an indicator of the spectral absorption of hemoglobin and is unaffected by the scalp.

Deeper areas could be illuminated with stronger laser beams, but light intensity has to be kept at levels that will not injure the skin. Given the technology's current practical depth of 2.5 centimeters, it is best suited for monitoring the upper layers of the brain. Smith suggests that the technology could be used to monitor specific clusters of blood vessels.

While the technology is designed to monitor a specific area, it could also be used to monitor an entire hemisphere of the brain. Measuring any area within the brain could yield better information about whole-brain oxygen saturation than a pulse oximeter elsewhere on the body would. Hilton Kaplan, a researcher at the University of Southern California's Medical Device Development Facility, says, "If this technology allows us to actually measure deep inside, then that's a big improvement over the limitations of decades of cutaneous versions."

Michal Balberg, the CEO and cofounder of OrNim, thinks that it may ultimately be feasible to deploy arrays of probes on the head to get a topographic map of brain oxygenation. In time, brain oxygenation may be considered a critical parameter that should be monitored routinely. Balberg says, "Our development is directed toward establishing a new brain vital sign that will be used to monitor any patient [who's] unconscious or under anesthesia. We believe that this will affect patient management in the coming decade in a manner comparable to pulse oximeters."

Michael Chorost covers medical devices for Technology Review. His book about cochlear implants, Rebuilt: How Becoming Part Computer Made Me More Human, was published in 2005.


http://www.technologyreview.com/Biotech/20123/

Voting with (Little) Confidence

Experts say that when it comes to voting machines, usability issues should be as much of a concern as security.

Touchy results: In a study of the usability of touch-screen voting machines, such as the Diebold AccuVote-TS, pictured above, participants made errors in voting as much as 3 percent of the time on the simplest tasks, and 15 percent of the time on more complicated tasks, such as changing a selection they had previously made. Although the error rates are relatively small, the researchers point out that they would matter in elections as close as those in recent years.
Credit: Ben Bederson, Human-Computer Interaction Lab, University of Maryland

Electronic voting systems--introduced en masse following high-profile problems with traditional voting systems in the state of Florida during the 2000 presidential election--were designed to quell fears about accuracy. Unfortunately, those concerns continue to permeate political conversation. The Emergency Assistance for Secure Elections Act of 2008, introduced recently by Rep. Rush Holt (D-NJ), proposes government funding for jurisdictions that use electronic voting to switch to systems that produce a paper trail. But many experts say that a paper trail alone can't solve the problem.

Ben Bederson, an associate professor at the Human-Computer Interaction Lab at the University of Maryland, was part of a team that conducted a five-year study on voting-machine technology. Bederson says that machines should be evaluated for qualities beyond security, including usability, reliability, accessibility, and ease of maintenance. For example, in a 2006 Florida congressional election, some voters were uncertain whether touch-screen machines had properly recorded their votes, especially after 18,000 ballots in Sarasota County were marked "No vote" by the machines.

"Security, while important, happens to be one of those places where voting machines actually have not proven to fail," Bederson says. "However, in many other ways, they have failed dramatically, especially [regarding] usability. The original Florida problem was primarily a usability issue." (Among the problems in Florida in 2000 was the case of Palm Beach County, where some voters were confused by a ballot design that listed candidates in two columns. The confounding layout led some people to mistakenly vote for Patrick Buchanan when they intended to vote for Al Gore.)

Bederson's team, which included researchers from the University of Maryland, the University of Rochester, and the University of Michigan, focused particularly on usability. They evaluated electronic voting systems built by Diebold, Election Systems and Software, Avante Voting Systems, Hart InterCivic, and Nedap Election Systems, as well as one prototype built by Bederson himself.

In the study, participants were told to vote for particular candidates in mock elections. The researchers then compared the results recorded on the machines with the voters' intentions. Bederson says that even for the simplest task--voting in one presidential race on a single screen--participants had an error rate of around 3 percent. When the task became more complicated, such as when voters were asked to change their selection from one candidate to another, the error rate increased to between 7 and 15 percent, depending on the system.

Bederson notes that the error rates observed in the study may not necessarily match the error rates of actual votes on actual machines, but the results still raise concern, considering how close some recent elections have been. His group recorded one test vote in which the errors caused different candidates to win a race depending on which machine was used. "As to whether errors are biased, the answer in general is that it depends on the specific usability problem," Bederson says.
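
A back-of-envelope simulation shows why error rates of a few percent matter in a close race (a toy model in Python, not part of the Maryland study; the electorate size and margin are invented, and the 3 percent error rate echoes the simplest-task figure above):

```python
import random

random.seed(1)  # reproducible mock elections

def simulate(n_voters=10_000, winner_share=0.501, error_rate=0.03, trials=200):
    """Fraction of mock elections in which the rightful winner still wins."""
    winner_intent = int(n_voters * winner_share)  # e.g. 5,010 of 10,000 voters
    wins = 0
    for _ in range(trials):
        recorded = 0
        for i in range(n_voters):
            vote = i < winner_intent              # the voter's true intention
            if random.random() < error_rate:      # machine records the wrong choice
                vote = not vote
            recorded += vote
        wins += recorded * 2 > n_voters
    return wins / trials

print(simulate(error_rate=0.00))  # 1.0: with no errors, the 50.1% winner always wins
print(simulate(error_rate=0.03))  # well below 1.0: unbiased noise alone can flip this race
```

Even errors that favor neither candidate add enough noise to overturn a 20-vote margin a substantial fraction of the time; errors biased toward one candidate, as in the Palm Beach case, shift the result outright.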


http://www.technologyreview.com/Infotech/20122/?a=f

Monday, January 28, 2008

Gene Therapy for Chronic Pain

Researchers use gene therapy to stop pain signals before they reach the brain.

The pain gate: When we suffer pain--whether from a stubbed toe or a metastasized tumor--pain signals are transmitted to the brain from around the body through these groups of sensory neurons, called dorsal root ganglia (DRG). A new gene-therapy technique intercepts pain signals at the DRG using a gene for a naturally produced opiate-like chemical. On the right, the cells of a rat's DRG glow green with a marker for the opiate-like gene one month after it was injected into the rat's spinal fluid. On the left are DRG cells from a control rat injected with saline solution.
Credit: PNAS

A new kind of gene therapy could bring relief to patients suffering from chronic pain while bypassing many of the debilitating side effects associated with traditional painkillers.

Researchers at Mount Sinai School of Medicine injected a virus carrying the gene for an endogenous opioid--a chemical naturally produced by the body that has the same effect as opiate painkillers such as morphine--directly into the spinal fluid of rats. The injections were targeted to regions of the spinal cord called the dorsal root ganglia, which act as a "pain gate" by intercepting pain signals from the body on their way to the brain. "You can stop pain transmission at the spinal level so that pain impulses never reach the brain," says project leader Andreas Beutler, an assistant professor of hematology and medical oncology at Mount Sinai.

The injection technique is equivalent to a spinal tap, a routine procedure that can be performed quickly at a patient's bedside without general anesthesia.

Because it targets the spinal cord directly, this technique limits the opiate-like substance, and hence any side effects, to a contained area. Normally, when opiate drugs are administered orally or by injection, their effects are spread throughout the body and brain, where they cause unwanted side effects such as constipation, nausea, sedation, and decreased mental acuity.

Side effects are a major hurdle in treating chronic pain, which costs the United States around $100 billion annually in treatment and lost wages. While opiate drugs can be very effective, the doses required to successfully control pain are often too high for the patient to tolerate.

"The side effects can be as bad as the pain," says Doris Cope, director of the University of Pittsburgh Medical Center's Pain Medicine Program. Achieving the benefits of opiate treatment without their accompanying side effects, Cope says, would be a "huge step forward."

Beutler hopes to do just that. "Our strategy was to harness the strength of opioids but target it to the pain gate, and thereby create pain relief without the side effects that you always get when you have systemic distribution of opioids," he says.

Several groups have previously attempted to administer gene therapy for pain through spinal injections, but they failed to achieve powerful, long-lasting pain relief. The new technique produced results that lasted as long as three months from a single injection, and unpublished follow-up studies suggest that the effect could persist for a year or more.

Beutler credits his team's success to the development of an improved virus for delivering the gene. The team uses a specially adapted version of adeno-associated virus, or AAV--a tiny virus whose genome is an unpaired strand of DNA. All the virus's own genes are removed, and the human endogenous opioid gene is inserted in their place. Beutler's team also mixed and matched components from various naturally occurring AAV strains and modified the genome into a double-stranded form. These tweaks likely allow the virus to infect nerve cells more easily and stick around longer.

Once the virus is injected into the spinal fluid and makes its way into the nerve cells of the pain gate, it uses the host cells' machinery to churn out the opioid protein--which then goes to work blocking pain signals on their way to the brain. Normally, the endogenous opioid gene is only rarely activated, but the version carried by the AAV has been modified to produce the opioid chemical continuously.

Cope says that using endogenous opioids is inherently superior to injecting synthetic opiate drugs directly into the spinal fluid, an approach that requires the installation of a pump in order to deliver the drugs over a long time period. "It's kind of a holy grail," she says. "If the body's own system for pain control were activated by genetic expression, that would be superior to an artificial medication."

In Beutler's study, which was published this week in PNAS, rats were surgically modified to have a stronger than usual response to pressure on their paws, mimicking the effects of so-called neuropathic pain. The gene-therapy treatment effectively restored the rats to a normal level of pain sensitivity. The team also tested a nonopioid gene, which produced comparable pain relief through an entirely different mechanism. But while the opioid gene's effects will likely extend to humans, who respond to opiates the same way rats do, the nonopioid's effects may be rat specific.

The Stockholm-based company Diamyd Medical has been developing a different approach to gene therapy for chronic pain that also bypasses the side effects of standard pain treatment. The approach uses a deactivated version of herpes simplex virus (HSV). HSV can be administered straight through the skin as it naturally finds and infects peripheral nerves and travels to the spinal cord on its own. Darren Wolfe of Diamyd says that this method is superior to spinal injection because it's safer and easier, and it can be administered repeatedly.

Because of these considerations, the HSV method may be preferable for treating localized pain. However, when chronic pain involves multiple areas of the body--as it often does with, for example, metastasized cancers--going straight to the pain gate could work more efficiently.

While both of these methods have proved effective in animal models of pain, their efficacy in human patients remains to be shown. Diamyd recently applied to the FDA to begin phase I clinical trials, and Beutler estimates that his approach could be tested on humans in as few as three years.


http://www.technologyreview.com/Biotech/20118/

TV for the Visually Impaired

Using a new algorithm, researchers are trying to enhance picture quality so that those with macular degeneration can enjoy television.

Enhanced vision: Researchers at the Schepens Eye Research Institute have developed software that lets users enhance the contrast of images on a television screen. In the image above, the screen is split: on the left is an unenhanced television picture, and on the right is a picture with the contrast enhanced.
Credit: Schepens Eye Research Institute

Enjoying a favorite TV show can be difficult for someone with macular degeneration. Like many kinds of visual impairment, macular degeneration makes the images on the screen seem blurred and distorted. The finer details are often lost. Now researchers at the Schepens Eye Research Institute have developed software that lets users manipulate the contrast to create specially enhanced images for those with macular degeneration.

"Our approach was to implement an image-processing algorithm to the receiving television's decoder," says Eli Peli, a professor of ophthalmology at Harvard Medical School and the project leader. "The algorithm makes it possible to increase the contrast of specific size details."

The researchers focused their work on patients with age-related macular degeneration, a disease in which the macula--the part of the eye that's responsible for sharp, central vision--is damaged. According to the American Macular Degeneration Foundation, more than 10 million Americans suffer from the disease, which often leaves those afflicted with a central blind spot. A patient's remaining vision is often blurred, making it extremely difficult for people to watch television or even read the paper, says Mark O'Donoghue, clinic director of the New England College of Optometry's Commonwealth Avenue Clinic. "This is really new and fascinating to read about," says O'Donoghue. "I recognize the basic facts in the technology and the path of physiology in which [Peli] is doing this, and it is innovative."

Peli and his group currently have the new software running on a computer in their lab, but they're expecting to receive a prototype system built by Analog Devices in April 2008.

Peli's group discovered that patients suffering from macular degeneration could not perceive the high spatial frequencies in an image, which left them unable to see fine details.

To give the patient a much better chance of discerning the image, the researchers designed an algorithm that specifically increases the contrast over the range of spatial frequencies that the visually impaired can see: the middle and low spatial frequencies. Ultimately, Peli says, the system enhances the contrast of the picture so that the finer details become more evident.
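
As a rough illustration of the approach (a sketch of the general idea, not Peli's actual implementation), the Python fragment below boosts a chosen band of spatial frequencies in a grayscale frame; the band edges and gain are hypothetical values picked for demonstration.

import numpy as np

# A minimal sketch of band-limited contrast enhancement, assuming a
# grayscale frame with values in [0, 1]. Band edges and gain are
# illustrative values, not parameters from Peli's algorithm.
def enhance_contrast(frame, low=0.02, high=0.25, gain=1.8):
    F = np.fft.fft2(frame)                        # into the frequency domain
    fy = np.fft.fftfreq(frame.shape[0])[:, None]
    fx = np.fft.fftfreq(frame.shape[1])[None, :]
    r = np.sqrt(fx**2 + fy**2)                    # radial spatial frequency
    band = (r >= low) & (r < high)                # middle/low frequencies only
    F[band] *= gain                               # amplify contrast in the band
    out = np.real(np.fft.ifft2(F))
    return np.clip(out, 0.0, 1.0)                 # keep the frame displayable

Turning the gain parameter up or down would play the role of the remote-control adjustment described next.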

The contrast can be adjusted by a user in much the same way that one would change the volume on a TV using a remote control. O'Donoghue likens the system to a stereo equalizer for the eyes that allows TV watchers to fine-tune the picture.

To measure the amount of image enhancement that individuals prefer, the researchers recently conducted a study with 24 visually impaired patients and 6 normally sighted people. The subjects sat in front of a television and watched four-minute videos, adjusting the level of contrast with a remote control. The researchers found that all the subjects--even the normally sighted ones--wanted some level of enhancement, and the majority of the time a subject chose the same level of enhancement whether watching a dark scene or fast action, says Peli. (The amount of enhancement selected correlated with the severity of the subject's vision loss.) The study was published last month in the Journal of the Optical Society of America.

One day, this system could transform watching TV alone or with the family into a more "rewarding experience" by making it easier for people to pick out the objects of interest from their surroundings, says Tom O'Donnell, an assistant professor at the University of Tennessee's Hamilton Eye Institute.

Peli hopes that the system will eventually be incorporated into the menu options for all televisions. Ideally, people will have the option to see an enhanced view just as the hearing impaired have the option to call up captions, he says.


http://www.technologyreview.com/Infotech/20117/?a=f

Pioneer PDP-5080XD Review


50in Plasma
The best HD or SD picture bar none, at a price.
HD Ready: yes
Resolution: 1,365 x 768
Rating: 95%

Note

Alongside the Pioneer PDP-5080XD sits its almost identical sibling, the PDP-508XD. Both units retail for around the same amount, with the 5080XD having the advantage of a pedestal stand included in the package, and the 508XD endowed with some extra features.

These features include a USB port, picture-in-picture (PiP) functionality, a subwoofer output, ISF C3 (Custom Calibration Configuration) compatibility, and Intelligent Brightness Control.

Design

You pay a little bit more for a Pioneer, but some of this premium has obviously been directed towards quality materials, with the screen just oozing quality from every pore. The glossy black slim-framed PDP-5080XD will bring envious glances from friends and neighbours alike.

Features

With an all-new 8th-generation screen (dubbed 'Kuro'), the Pioneer PDP-5080XD promises black levels that match and better anything seen to date from other plasma and LCD TV manufacturers.

Screen: 50in 16:9
Tuner: Digital
Sound System: Nicam
Resolution: 1,365 x 768
Contrast Ratio: 16,000:1
Other Features: PURE Drive 2HD, Digital Noise Reduction (DNR), Direct Colour Filter III, MPEG NR (Noise Reduction).
Sockets: 3 HDMI, 3 SCART, Component Video, Composite Video, S-Video, PC input.

Even with a whole host of new technological innovations, this new generation of screens will not be found wanting connectivity-wise. The PDP-5080XD features 3 HDMI inputs, 3 SCARTs, Component Video, Composite Video, and S-Video. Additionally, there is a CI slot and a USB port.

Picture-processing technology on the PDP-5080XD comes in the shape of Pioneer's PURE DRIVE 2HD, which has been designed to eliminate video noise by minimising intermediate analogue-to-digital and digital-to-analogue conversions. Image processing has been optimised for plasma screens, and to work with Pioneer's latest 8th-generation panel.

PURE DRIVE 2HD is complemented by i-CLEAR Drive, which employs multi-bit digital video processors to increase the range of gradation levels, producing more subtle colour differences.
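
To see why extra processing bits translate into more gradation levels, consider a toy calculation in Python (our own illustration, not Pioneer's actual processing chain): apply the same gamma-style curve at two internal bit depths and count how many distinct output levels survive quantisation.

import numpy as np

# Toy demonstration: a nonlinear curve applied at a low bit depth
# collapses many dark input levels together; extra internal bits
# preserve far more distinct gradations.
def surviving_levels(bits, gamma=2.2):
    n = 2**bits - 1
    x = np.arange(n + 1) / n           # every representable input level
    y = np.round((x ** gamma) * n)     # apply curve, quantise at same depth
    return len(np.unique(y))

print(surviving_levels(8))    # 8-bit processing: many levels collapse
print(surviving_levels(12))   # 12-bit processing: far more gradations remain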

The PDP-5080XD retains all of the technological wizardry of its predecessors. Pioneer has once again deployed its 'Deep Waffle Rib' structure, designed to reduce cross-pixel light and colour contamination.

Speakers are optional on the PDP-5080XD, Pioneer assuming that the screen will more often than not form the centrepiece of a home cinema system.

Performance

The black-level performance of the PDP-5080XD is a revelation, with a purity that stands head and shoulders above offerings from any other plasma or LCD manufacturer out there. The subtlety of gradation across dark scenes is nothing short of spectacular.

High Definition (HD) viewing from Blu-ray or HD DVD is stunning, with a sharpness and level of detail that is at least as good as the best from other manufacturers.

Colours on the PDP-5080XD are exceptionally vibrant, with an almost complete absence of the faults and niggles displayed by lesser plasma screens. Colour saturation is class leading, and the realism of skin tones, for example, illustrates just how accomplished this screen is.

Standard Definition (SD) pictures are not the best we have seen, but they are close, which is saying something for a 50in screen. They are easily the best we have seen on a large (50in+) plasma or LCD, even with low-quality Freeview sources.

Now we come to the negative, although a number of enthusiasts out there won't see it as such. Getting the most out of the PDP-5080XD takes a great deal of tweaking of the various picture-processing settings. Without this tweaking, the PDP-5080XD has some noticeable shortcomings. In fact, 'out of the box' the PDP-5080XD is quite ordinary in some respects - in particular, fast action scenes suffer from a certain amount of judder.

By tweaking the PDP-5080XD's Drive Mode settings we were able to eradicate the motion problems, and the important point is that all aspects of the PDP-5080XD's picture can be improved. However, unlike Panasonic plasmas, for example, the PDP-5080XD has a 'specialist' feel about it, and this won't suit everyone.

None of this should take anything away from the PDP-5080XD, which is in our opinion the best-performing flat panel out there - it just requires a little tender loving care (we will be producing a more comprehensive 'tweaking' guide for the PDP-5080XD soon).

Conclusion

If you want to pull this Pioneer straight out of the box and have little or no interest in 'tinkering' with its performance, then the PDP-5080XD will make an ideal high-end choice.

The extra features offered by the PDP-508XD will appeal to the TV enthusiasts out there, not necessarily those of you who enjoyed poking a screwdriver into the back of your old CRT, but those of you interested in the technicalities of getting the very best picture from your plasma.

The Pioneer PDP-5080XD has given us all at HDTVorg the feeling that the days of CRT outperforming plasma with SD sources are numbered. The PDP-5080XD is not quite there yet, but the difference is negligible, and with HD sources the PDP-5080XD brings you, in our opinion, the ultimate home viewing experience.

http://www.hdtvorg.co.uk/reviews/plasma/pioneer_pdp-5080xd.htm

Sunday, January 27, 2008

British Airport Installs Biometric Security

Manchester swaps traditional employee access cards for an iris recognition system.


Manchester Airport has implemented what it claims is the U.K.'s first biometric access control system based on iris recognition. The system officially went live just before Christmas and is used to control airport workers' access to secure parts of the airport.

Manchester Airport and the Department for Transport (DfT) partnered with Human Recognition Systems (HRS), a Liverpool-based identity management consultancy, to implement the system.

Currently, most U.K. airports use conventional access control cards to regulate the movement of people from landside to airside. To enforce adequate security levels, security officers often have to man each entry point, carrying out a variety of time-consuming and costly security checks.

The biometric access control system at Manchester Airport augments its manual checking procedures. It uses iris recognition cameras to allow staff access into restricted zones.

"We have been working with Manchester airport now for the past five years," said Neil Norman, CEO at Human Recognition Systems, speaking to Techworld. "We trialled various biometric systems, including fingerprints, iris recognition, and hand geometry recognition. In the end, iris and hand geometry recognition were the two finalists, but it was felt that iris recognition was the best fit."

"With traditional access cards you are unable to monitor behavior of every single member of staff," he added. "People can share their identity, either via collusion or collaboration."

He pointed out that airports have to contend with a large workforce: 25,000 in the case of Manchester. This includes not just airport staff but also airline catering staff, engineers, fuel-bowser service crews, and so on.

"This is not retinal scanning," said Norman. "We are looking to capture the colored doughnut around your pupil, i.e. the colored bit in your eye. An iris camera focuses on the eye and simply takes a picture of the doughnut to get a pattern."

"An algorithm extracts the iris pattern and stretches it out into something like a barcode, with is then compared against our database. There are also systems in place to make sure it is a real living eye in the picture, and not just a photograph of an eye."

"We get a sub second response time with this iris recognition system," said Norman. "It is faster than taking out an access card, swiping it, and then going through."

He dismisses as nonsense the urban legend that alcohol or bloodshot eyes could defeat an iris recognition system. Nor are glasses a problem, although sunglasses could be, as the camera needs to actually see the iris.

With the U.K. government stepping up its commitment to its controversial e-Borders program, iris recognition is being touted as the key biometric system for the government to track passenger movements.

Heathrow Airport piloted the technology last year with its miSense project, which saw more than 60,000 passengers processed by the Iris Recognition Immigration System, or IRIS. Passengers had their iris patterns and passport details stored in a database, enabling them to pass through border controls electronically, without a face-to-face encounter with an official.


http://www.pcworld.com/article/id,141697-c,biometricsecuritydevices/article.html

Sony sells first OLED TVs

Consumers in Japan have snapped up the first batch of 200 OLED (Organic Light Emitting Diode) TVs from the electronics giant Sony, at around ¥200,000 (£850) each.

The 11in XEL-1 is a mere 3mm thick and sports a claimed contrast ratio of 1,000,000:1, a figure LCD and Plasma manufacturers can only dream about.

Although the planned monthly production of 2,000 units won't put a dent in Sony's estimated sales of 10 million LCD TVs a year, the retail availability of such a revolutionary technology is an important milestone in TV hardware development.

The great advantage OLED displays have over LCDs is that they do not require a backlight, and they consume far less energy as a result. OLED displays can also be manufactured more efficiently than LCDs and plasmas. A major problem, however, is the degradation of the organic materials used to make OLED TVs, which limits their life span to about 40% of that of equivalent LCD or plasma screens.

The XEL-1 is based on Sony's 'Super Top Emission' (STE) technology, which uses a pitted organic film (the pits are called micro-cavities) to reflect out of the display light that has bounced back off the display's semi-transparent cathode - the negatively charged electrode that sends electrons through the OLED's organic film, generating light. Colours are produced by STE's colour filters above the cathode.

No word yet on whether Sony plan to introduce their OLED technology in Europe or the US.


http://hdtvorg.co.uk/news/articles/2008012601.htm

Five Nifty Features in Nikon's D300 Digital SLR

Get to know Nikon's mid-range digital SLR camera better through one user's shooting torture tests.


The best way to learn a camera is to take it out in the field, pushing the camera's limits as you shoot in both familiar circumstances and unknown environments. I had the opportunity to do just that with the new Nikon D300 digital SLR recently--and came away impressed by many of the usability touches I found.

At $1799 for the body only, the D300 represents Nikon's new midrange offering, falling between the D80 and the professional-level D3. This model replaces the two-year-old Nikon D200, which earned a respectable PC World rating of 80.

In developing the D300, Nikon leveraged many of the technological advances it built into the $5000 Nikon D3. As someone who's shot with a pro-level camera (the Canon 1D Mark II), I hate making compromises when I step down to a lower-level SLR.

While I haven't shot as extensively with the D3, I can say that when I used the D300 last week, I didn't feel as if I was making too many compromises. What follows are five aspects of the D300's design that caught my attention.

Image Zoom: The D300 lets you zoom into an image, but that's nothing special; it's expected of a digital SLR. What was a pleasant surprise--and new to the D300--was how quickly and easily I could zoom in *and* pan around the image to check how clear the shot was--without first entering playback mode. Press the + magnifying-glass button (second up from the bottom, along the left side of the 3-inch, 920,000-dot LCD), and you move into the image. Hold the button in, and the zoom moves deeper still. A picture-in-picture box appears at the lower left; a yellow box inside it indicates how far into the image you have zoomed. Panning around within the image was a breeze, thanks to Nikon's multidirectional navigation pad; for panning, I much preferred this control to Canon's stiff, small joystick. What struck me was how quick and easy the camera's controls and internal processing made it to spot-check the clarity of my 12.1-megapixel images.

Live View: Shooting with a point-and-shoot digital camera has spoiled me: there are times I want nothing more than to frame my image on the LCD screen, not through the viewfinder. Don't get me wrong--I rely heavily on the viewfinder as well--but some shots, such as overhead shots looking down into a crowd, or low-angle shots below eye level, are just plain easier when composed on the LCD. I haven't found all Live View functions on SLRs intuitive (Canon's EOS 40D requires you to make some adjustments to enable Live View), but the D300's worked quite well in hand-held mode (you can also use Live View in tripod mode, but I didn't try that). Switching between Live View and the optical viewfinder requires a combination button press and dial turn, but that wasn't a problem when I wanted to move quickly from one to the other. If I were trying to switch in time to catch a split-leap on the balance beam while shooting gymnastics, I would have a problem, but moving between modes was easier than menu-based switching.

Inside the Ferry Building with no flash (left) and with flash.
Low-light Handling: While PC World's tests have yet to be completed, in the field the D300 seemed to do a solid job handling low-light situations. Whether I was shooting the gymnasts in a poorly lit gymnasium at Stanford University, outside AT&T Park at night, or inside San Francisco's iconic Ferry Building, the camera surprised me time and again with how it could capture an image without the flash--with what appeared to be a reasonable degree of sharpness and acceptably low noise (I used a 15-year-old 50mm f/1.8 lens for the gymnastics, and Nikon's 18-200mm lens with Vibration Reduction for the latter two tasks). The Auto ISO feature works in all modes, up to ISO 3200--which means you don't have to think about which ISO is best while you're shooting.

Changing Auto-focus Points: In shooting, I found it notably simple to change the auto-focus point (the D300 offers up to 51 focus points, vs. 11 on the D200) on the fly. Rather than cycling through focus points, I could use the multidirectional navigation pad to choose exactly where I wanted the focus point. This proved a great convenience when I was shooting a mask with depth and wanted to both compose the image and place the focus on a specific part of the mask.

Information Displays: I found the D300's information displays well presented overall (as on the D200). Whether I was looking at the LCD on top of the camera, at the large-type replication of that information on the rear LCD (new to the D300: depending on the ambient light, the rear display automatically switches to a black background with white text for easier readability), or at an image's related shooting information, the pertinent info was clearly and concisely presented.

We'll know more when the camera comes out of testing and is the subject of a full PCW review. Stay tuned.


http://www.pcworld.com/article/id,141719-c,digitalcameras/article.html