Saturday, 19 March 2011

Johnald's Fantastical Daily Link Splurge


Possible Early Warning Sign for Market Crashes

Posted: 18 Mar 2011 10:11 AM PDT

Complexity researchers who study the behavior of stock markets may have identified a signal that precedes crashes.

They say the telltale sign is a measure of co-movement, or the likelihood of stocks to move in the same direction. When a market is healthy, co-movement is low. But in the months and years before a crash, co-movement seems to grow.

Regardless of whether stock prices go up or down or stay the same, they do so in tandem. People are copying each other, and a small nudge can send everyone in the same direction. The system appears primed for collapse.

"One of the most important things happening now is that economists are trying to understand, what is systemic risk? When is the entire system vulnerable to disaster? Our results show that we have a direct, unambiguous measure of that vulnerability," said Yaneer Bar-Yam, president of the New England Complex Systems Institute.


Bar-Yam's findings, released Feb. 13 on arXiv, are part of an emerging research field known as econophysics, which applies insights from the physical world to economics, especially from systems in which networks of interacting units produce radical collective behaviors.

Heated water turning to gas is one such behavior, known technically as a phase transition. Another is snow gathering into an avalanche. Seen through an econophysicist's eyes, a stock market panic is an avalanche, too.


Using a phase-transition model, Bar-Yam's group analyzed patterns of movement in the stock market. At the beginning of the 2000s, co-movement was low: on any given day, roughly half the stocks moved up while the other half moved down. By 2008, shortly before the crash, co-movement was absolute. People were no longer making independent decisions, but copying others.

"There's a break point where the system is flat — equally likely to have any number of stocks moving together on a particular day," said Bar-Yam. "And if you see these collective behaviors building up, then you know you're in trouble."

At top, a metric of stock co-movement during the 2000s. As it gets closer to zero, individual stocks are more likely to move in the same direction. At bottom is the Russell 3000 Index.

After expanding the analysis back to 1985, they found periods of increasing co-movement within four years before each major crash, though never so starkly as before 2008. The researchers also propose that increasing co-movement fuels large, single-day market drops.

Jeffrey Fuhrer, research director at the Federal Reserve Bank of Boston, called the results intriguing but preliminary, requiring more rigorous statistical examination.

"As an initial pass, it's an interesting idea," he said, but doesn't yet distinguish when investors respond rationally and independently to the same information, such as a rise in fuel prices, or move reflexively as a herd.

However, the line between those behaviors may be blurry. According to Bar-Yam's group, external stresses — fuel prices, war, the perception of market bubbles — may increase the market's sensitivity, making it more vulnerable to panic. So might changes in the very structure of markets, from their increasingly interlocking nature to instant-communication tools.

Fuhrer's cautions were echoed by econophysicist Tobias Preis of the Swiss Federal Institute of Technology. "One should be very careful about generalization to predict future crises," he said. "The most important point is to quantify this risk. That would be a huge step forward."

If co-movement does prove to be a reliable early warning signal, it's an open question how to make use of it. "That is one of the $64,000 questions," said Fuhrer.

Whereas bailing out a company is relatively simple, intervening in the dynamics of a system is not. But the first step is recognizing that markets follow rules we're only beginning to understand.

"The financial crisis has shown that mainstream economic theories have limitations that need to be overcome," said Dirk Helbing of the Swiss Federal Institute of Technology, who specializes in modeling crowd behavior. "Economic systems have become much more complex, and complex systems have certain features — cascading effects, systemic shifts. This calls for new theoretical approaches."

Images: 1) NASDAQ © 2010. 2) arXiv.


Citation: "Predicting economic market crises using measures of collective panic." By Dion Harmon, Marcus A. M. de Aguiar, David D. Chinellato, Dan Braha, Irving R. Epstein, Yaneer Bar-Yam. arXiv, Feb. 13, 2011.

Understanding Japan’s Nuclear Crisis

Posted: 18 Mar 2011 09:00 AM PDT

By John Timmer, Ars Technica

Following the events at the Fukushima Daiichi nuclear reactors in Japan has been challenging. At best, even those present at the site have a limited view of what's going on inside the reactors themselves, and the situation has changed rapidly over the last several days. Meanwhile, the terminology involved is somewhat confusing—some fuel rods have almost certainly melted, but we have not seen a meltdown; radioactive material has been released from the reactors, but the radioactive fuel currently remains contained.

Over time, the situation has become a bit less confused, as cooler heads have explained more about the reactor and the events that have occurred within it. What we'll attempt to do here is aggregate the most reliable information we can find, using material provided by multiple credible sources. We've attempted to confirm some of this information with groups like the Nuclear Regulatory Commission and the Department of Energy but, so far, these organizations are not making their staff available to talk to the press.

Inside a Nuclear Reactor

Nuclear reactors are powered by the fission of a radioactive element, typically uranium. There are a number of products of this reaction, but the one that produces the power is heat, which the fission process gives off in abundance. There are different ways to extract electricity from that heat, but the most common shares some features with the first steam engines: use it to boil water, and use the resulting steam pressure to drive a turbine and generator.

Radioactivity makes things both simpler and more complex. On the simpler side, fission will readily occur underwater, so it's easy to transfer the heat to water simply by dunking the nuclear fuel directly into it.


In the reactor design used in Japan, the fuel is immersed in water, which boils off to generate power, is cooled, and then returns to the reactor. The pressure vessel and primary containment keep radioactivity inside. (Ars Technica)

Unfortunately, the radioactivity complicates things. Even though the fuel is sealed into rods, it's inevitable that this water will pick up some radioactive isotopes. As a result, you can't just do whatever you'd like with the liquid that's been exposed to the fuel rods. Instead, the rods and water remain sealed in a high-pressure container and linked pipes, with the hot water or steam circulated out to drive machinery, but then reinjected back into the core after it has cooled, keeping a closed cycle.

The water recirculation doesn't just let us get power out of the reactor; it's essential to keeping the reactor core cool. Unless the heat of decay is carried away from the core, its temperature will rise rapidly, and the fuel and its structural support will melt.

The Fission Reaction

Uranium ore. (Marcin Wichary/Flickr)

On its own, the uranium isotope used in nuclear reactors will decay slowly, releasing a minimal amount of heat. However, one of the decay products is a neutron, which can strike another atom and induce it to split; still more neutrons are produced as the products of that split decay themselves. At high enough densities, this chain reaction of neutron-induced fission can produce a nuclear explosion. In a nuclear reactor, the fuel density is low enough that this isn't a threat, and the rate of fission can be controlled by inserting or removing rods of a material that absorbs neutrons, typically boron.
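The control-rod logic above boils down to a single number, often called the multiplication factor k: the average number of follow-on fissions each fission triggers. A simplified illustration (real reactor kinetics involve delayed neutrons and much more):

```python
def neutron_population(k, generations, n0=1000.0):
    """Neutron count after successive fission generations.

    k > 1: supercritical, the chain reaction grows
    k = 1: critical, steady power output
    k < 1: subcritical (control rods absorbing neutrons), reaction dies away
    """
    n = n0
    for _ in range(generations):
        n *= k
    return n

print(neutron_population(1.1, 50))  # unchecked: the population explodes
print(neutron_population(0.9, 50))  # rods in: the population collapses
```

Because each generation lasts a tiny fraction of a second, even a k slightly below 1 quenches the uranium chain reaction almost immediately once the rods go in, which is exactly what the automatic shutdown at Fukushima did.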

Completely inserting control rods to limit uranium's fission, however, doesn't affect what's happened to the products of previous reactions. Many of the elements that are produced following uranium's split are themselves radioactive, and will decay without needing any encouragement from a neutron. Some of the neutrons from the reactor will also be absorbed by atoms in the equipment or cooling water, converting those to radioactive isotopes. Most of this additional radioactive material decays within the span of a few days, so it's not a long-term issue. But it ensures that, even after a reactor is shut down by control rods, there's enough radioactive decay around to keep things hot for a while.
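The size of that leftover heat can be estimated with the classic Wigner-Way approximation for decay heat (a rough textbook formula applied here for illustration, not a model of these specific reactors):

```python
def decay_heat_fraction(t_shutdown_s, t_operating_s=3.15e7):
    """Wigner-Way estimate of decay heat as a fraction of full thermal
    power, t_shutdown_s seconds after shutdown, for a core that ran
    for t_operating_s seconds beforehand (default: about one year)."""
    return 0.0622 * (t_shutdown_s ** -0.2 -
                     (t_shutdown_s + t_operating_s) ** -0.2)

hour, day = 3600.0, 86400.0
print(f"{decay_heat_fraction(hour):.1%}")  # ~1% of full power after an hour
print(f"{decay_heat_fraction(day):.1%}")   # a few tenths of a percent a day later
```

Even a few tenths of a percent of a multi-gigawatt thermal core amounts to megawatts of heat, which is why cooling must continue long after the fission reaction itself has stopped.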

All of which makes the continued operation of the plant's cooling system essential. Unfortunately, cooling system failures have struck several of the reactors at Fukushima Daiichi.

Surviving the Quake, But Not the Tsunami

Because cooling is so essential to a plant's operation, there are a few layers of backups to keep the pumps running. For starters, even if the reactors themselves are taken offline, the coolant pumps can receive power from offsite; this option was eliminated by the earthquake itself, which apparently cut off the external power to Fukushima. The earthquake also triggered a shutdown of the reactors, removing the obvious local source of power to the pumps. At this point, the first backup system kicked in: a set of on-site generators that burn fossil fuels to keep the equipment running.

Those generators lasted only a short while before the tsunami arrived and swamped them, flooding parts of the plant's electrical system in the process. Batteries are in place to allow a short-term backup for these generators; it's not clear whether these failed due to the problems with the electrical system, or were simply drained. In any case, additional generators were slow to arrive due to the widespread destruction, and didn't manage to get the pumps running again when they did.

As a result, the plants have been operating without a cooling system since shortly after the earthquake. Even though the primary uranium reaction was shut down promptly, the reactor cores have continued to heat up due to secondary decay products.

Ugly Possibilities

Without cooling, there are a number of distinctly ugly possibilities. As water continues to be heated, more steam will be generated within the reactor vessel, increasing the pressure there, possibly to the point where the vessel would fail. If the reactor vessel burst, its contents would spill into the primary containment vessel, which would limit the immediate spread of radioactive materials. However, the rupture of the reactor vessel would completely eliminate any possibility of restoring the coolant system, and might ultimately leave the reactor core exposed to the air.

And that would be a problem, since air doesn't carry heat away nearly as efficiently as water, making it more likely that the temperatures would rise sufficiently to start melting the fuel rods. The other problem with exposing the fuel rods to air is that the primary covering of the rods, zirconium, can react with steam, reducing the integrity of the rods and generating hydrogen.

To respond to this threat, the plant's operators took two actions, carried out on different days at the different reactors. To begin with, they attempted to pump cold sea water directly into the reactors to replace the boiled-off coolant water. This was not a decision made lightly; sea water is very corrosive and will undoubtedly damage the metal parts of the reactor, and its complex mixture of contents will also complicate the cleanup. This action committed the plant operators to never running the reactor again without a complete replacement of its hardware. As an added precaution, the seawater was spiked with a boron compound in order to increase the absorption of neutrons within the reactor.

The second action involved the bleeding off of some pressure from the reactor vessel in order to lower the risk of a catastrophic failure. This was also an unappealing option, given that the steam would necessarily contain some radioactivity. Still, it was considered a better option than allowing the container to burst.

This decision to bleed off pressure ultimately led to the first indications of radioactivity having escaped the reactor core and its containment structure. Unfortunately, it also blew the roof off the reactor building.

Hard Choices Lead to Bad Results

As seen in some rather dramatic video footage, shortly after the pressure was released, the buildings housing the reactors began to explode. The culprit: hydrogen, created by the reaction of the fuel casing with steam. The initial explosions occurred without damaging the reactor containment vessel, meaning that more significantly radioactive materials, like the fuel, remained in place. Larger increases in radioactivity, however, followed one of the explosions, indicating possible damage to the containment vessel, although levels have since fluctuated.

However, the mere presence of so much hydrogen indicated a potentially serious issue: it should only form if the fuel rods have been exposed to the air, which indicates that coolant levels within the reactor have dropped significantly. This also means that the structural integrity of the fuel rods is very questionable; they've probably partially melted.

Part of the confusion in the coverage of these events has been generated by the use of the term "meltdown." In a worst-case scenario, the entire fuel rod melts, allowing the fuel to collect on the reactor floor, away from the moderating effect of any control rods. Its temperature would soar, raising the prospect that the material will become so hot that it will melt through the reactor floor, or reach a source of water and produce an explosive release of steam laced with radioactive fuel. There is no indication that any of this is happening in Japan at the moment.

Still, the partial melting of some fuel does increase the chances that some highly radioactive material will be released. We're nowhere near the worst case, but we're not anywhere good, either.

An additional threat has recently become apparent, as one of the inactive reactors at the site suffered an explosion and fire in the area where its fuel is being stored. There is almost no information available about how the tsunami affected the stored fuel. Hydrogen is again suspected to be the source of the explosion, which again suggests that some of the fuel rods have been exposed to the air and could be melting. It's possible that problems with the stored fuel contributed to the recent radiation releases, since there isn't nearly as much containment hardware between the storage area and the environment.

Again, plans have been made to add sea water to the storage area, both by helicopter drops attempted earlier today, and through standard firefighting equipment.

Where We Stand

So far, the most long-lived radioactive materials at the site appear to remain contained within the reactor buildings. Radioisotopes have escaped, and continue to escape, containment, but there's no indication yet that these are anything beyond secondary decay products with short half-lives.

Although radiation above background levels has been detected far from the reactor site, most of this has been low-level and produced by short-lived isotopes. Prevailing winds have also sent a lot of the radioactive material out over the Pacific. As a result, most of the problems with radioactive exposure have been in the immediate vicinity of the Fukushima Daiichi reactors themselves, where radiation has sometimes reached threatening levels; it's been possible to hit a yearly safe exposure limit within a matter of hours at times. Areas around the reactors have been evacuated or subject to restrictions, but it's not clear how far out the areas of significant exposure extend, and they may change rapidly.

All of this is severely complicating efforts to get the temperatures under control. Personnel simply can't spend much time at the reactor site without getting exposed to dangerous levels of radioactivity. As a result, all of the efforts to get fresh coolant into place have been limited and subject to interruption whenever radiation levels spike. The technicians who continue to work at the site are putting their future health at risk.

There is some good news here, as each day without a critical failure allows more of the secondary radioactive materials to decay, lowering the overall risk of a catastrophic event. In the meantime, however, there's little we can do to influence the probability of a major release of radioactive material. Getting seawater into the reactors has proven to be hit-or-miss, and we don't have a strong sense of the structural integrity of a lot of the containment buildings at this point; what's happening in the fuel storage areas is even less certain. In short, our only real option is to try to get more water in and hope for the best.

Future of Nuclear Energy

Nuclear power plays a big role in most plans to limit the use of fossil fuels, and the Department of Energy has been working to encourage the building of the first plants in decades within the US. The protracted events in Japan will undoubtedly play a prominent role in the public debate; in fact, they may single-handedly ignite discussion on a topic that the public was largely ignoring. The take-home message, however, is a bit tough to discern at this point.

In some ways, the Japanese plants, even though they are an old design, performed admirably. They withstood the fifth-largest earthquake ever recorded, and the safety systems, including the automatic shutdown and backup power supplies, went into action without a problem. The containment systems have largely survived several hydrogen explosions and, so far, the only radioactive materials that have been released are short-lived isotopes that are concentrated in the plant's vicinity. If things end where they are now, the plants themselves will have done very well under the circumstances.

But, as mentioned above, ending where we are now is completely beyond our control, and that highlights some reasons why this can't be considered a triumph. Some of the issues are in the design. Although the plant was ready for an extreme event, it clearly wasn't designed with a tsunami in mind—it is simply impossible to plan for every eventuality. However, this seems to be a major omission given the plant's location. It also appears that the fuel storage areas weren't nearly as robustly designed as the reactors.

Once the cooling crisis started, a set of predictable issues cropped up. We can never send humans inside many of the reactor areas, leaving us dependent upon monitoring equipment that may not be working or reliable during a crisis. And, once radiation starts to leak, we can't send people to many areas that were once safe, meaning we've got even less of an idea of what's going on inside, and fewer points to intervene at. Hardware that wasn't designed for some purposes, like pumping sea water into the reactor vessel, hasn't worked especially well for the emergency measures.

On balance, the safety systems of this reactor performed reasonably well, but were pushed up against a mixture of unexpected events and design limits. And, once anything starts to go wrong with a nuclear reactor, it places the entire infrastructure under stress, and intervening becomes a very, very difficult thing to do.

This latter set of issues means that the surest way to build a safe nuclear plant is to ensure that nothing goes wrong in the first place. There are ways to reduce the risk by adding more safety and monitoring features while tailoring the design to some of the most extreme local events. But these will add to the cost of a nuclear plant, and won't ever be able to ensure that nothing goes wrong. So, deciding whether and how to pursue expanded nuclear power will require a careful risk analysis, something the public is generally ill-equipped for.

Top image: Ars Technica.

Source: Ars Technica.


Spacecraft Swings Into First Orbit Around Mercury

Posted: 18 Mar 2011 08:14 AM PDT

NASA's Messenger spacecraft swung into position around Mercury Thursday night, making it the first spacecraft ever to orbit the innermost planet.

Engineers at the Johns Hopkins University Applied Physics Laboratory in Maryland, 96 million miles from Mercury, received the signal confirming that Messenger (MErcury Surface, Space ENvironment, GEochemistry and Ranging) had completed its final maneuver at 9:10 pm EDT.

To slow down enough to get caught in Mercury's gravitational field, Messenger fired its main thruster for 15 minutes. The burn slowed the spacecraft by 1,929 mph and used up 31 percent of its original fuel supply.
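Those numbers are consistent with the Tsiolkovsky rocket equation. A back-of-the-envelope sketch, assuming a specific impulse of about 317 seconds for Messenger's bipropellant main engine (the Isp value is my assumption, not from the article):

```python
import math

delta_v = 1929 * 0.44704   # the burn's speed change, mph -> m/s
isp = 317.0                # assumed specific impulse, seconds
v_e = isp * 9.81           # effective exhaust velocity, m/s

# Tsiolkovsky: delta_v = v_e * ln(m0 / m1), so the propellant burned
# is this fraction of the spacecraft's pre-burn mass:
prop_fraction = 1 - math.exp(-delta_v / v_e)
print(f"{prop_fraction:.0%}")  # roughly a quarter of pre-burn mass
```

Burning roughly a quarter of the spacecraft's pre-burn mass in one maneuver is in line with the reported 31 percent of the original fuel supply, depending on how much of the launch mass was propellant.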

After finishing the burn, Messenger rotated to face the Earth by 9:45 p.m., and started transmitting data. Engineering and operations teams confirmed the maneuver went according to plan.

The event marks the end of a 6½-year journey for Messenger, which has made 12 laps around the solar system, with one flyby past Earth, two past Venus and three past Mercury since launching in August 2004.

Although engineers still need to do some analysis to figure out the spacecraft's exact orbit, they expect Messenger to swoop around Mercury in a highly elliptical orbit once every 12 hours. It will dip within 120 miles of Mercury's surface at its closest point, and go out to 9,320 miles at its farthest.
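Those orbital figures hang together: plugging the quoted periapsis and apoapsis into Kepler's third law recovers close to the 12-hour period. A quick check, using standard values for Mercury's radius and gravitational parameter (which are not given in the article):

```python
import math

GM_MERCURY = 2.2032e13   # m^3/s^2, Mercury's gravitational parameter
R_MERCURY = 2.4397e6     # m, Mercury's mean radius
MILE = 1609.34           # m

r_peri = R_MERCURY + 120 * MILE    # closest approach to the center
r_apo = R_MERCURY + 9320 * MILE    # farthest point from the center
a = (r_peri + r_apo) / 2           # semi-major axis of the ellipse

# Kepler's third law: T = 2*pi * sqrt(a^3 / GM)
period_s = 2 * math.pi * math.sqrt(a**3 / GM_MERCURY)
print(f"{period_s / 3600:.1f} hours")  # close to the quoted 12 hours
```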

The orbit goes nearly pole-to-pole, offset by about 7 degrees. That slight tilt is to help get a handle on the planet's gravitational field, said principal investigator Sean Solomon, a planetary scientist at the Carnegie Institution of Washington, in a press conference March 15.

Measurements of the gravitational field "will tell us something about Mercury's composition, the size of the core and the structure of that core," he said.


One of the mission's main objectives is to figure out why Mercury's core is so big compared to the cores of the other rocky planets. Another is to make high-resolution maps of the whole planet, some of which has still never been seen.

"Many on the science team have been involved from the very beginning," Solomon said. "We are extremely excited to begin that mapping."

Scientists also plan to search for water ice in craters at the poles which, despite Mercury's proximity to the sun and scorching daytime temperatures, are stuck in eternal freezing shadow.

The spacecraft's seven science instruments were turned off for orbit insertion, but they will reactivate March 23. The first orbital image, planned for March 29, will include some uncharted regions near Mercury's south pole.

The science phase of the mission will begin April 4. The Messenger team will release data to the science community at six-month intervals, but will release images at least once a day throughout the mission, Solomon said.

"In addition to the global imaging we'll be doing, we've targeted more than 2,000 areas for ultra-high-res with our narrow-angle camera. Many of them were not discovered until flybys," he said. "We've got a long list."

Images: 1) Artist's conception of Messenger approaching Mercury. 2) The target area for Messenger's first image from orbit, including never-before-seen terrain. (NASA/Johns Hopkins University Applied Physics Laboratory/Carnegie Institution of Washington)


Friday, 18 March 2011

Johnald's Fantastical Daily Link Splurge


Seasonal Methane Rain Discovered on Titan

Posted: 17 Mar 2011 12:12 PM PDT

Spring may bring methane showers to the deserts of Titan, Saturn's largest moon. NASA's Cassini spacecraft recently saw a large, dark puddle appear in the wake of a storm cloud at the moon's dune-filled equator.

"It's the only easy way to explain the observations," said planetary scientist Elizabeth Turtle of Johns Hopkins University Applied Physics Laboratory, lead author of a study March 18 in Science. "We're pretty confident that it has just rained on Titan."

Aside from Earth, Titan is the only world known to have liquid lakes, clouds and a weather cycle to move moisture between them. But on chilly Titan, where temperatures plunge to -297 degrees Fahrenheit, the frigid lakes are filled with liquid methane and ethane, not water.

Titan's lakes are also exclusively confined to the poles. The moon's dry central regions are covered in rippling dunes and arid deserts.

But the dunes are crisscrossed by a network of dry channels, suggesting a wetter past. In 2006, Cassini observations showed hints of drizzle at the equator, but not enough rain to explain the riverbeds.

"So the question was, 'When was the last rainfall near the equator of Titan?'" said planetary scientist Tetsuya Tokano of the University of Cologne in Germany, who was not involved in the new work. Some researchers suggested that the rivers were a relic of a bygone era, or carved by things other than rain.

"This observation by Turtle et al. showed for the first time that there is rainfall on present Titan, not merely millions of years ago but at the present Titan," Tokano said. "This is extraordinary."


In the new study, Turtle's team describes a large cloud system moving eastward across Titan's equator on Sept. 27, 2010. By October, observations show, a dune field called Belet that lies east of the clouds suddenly darkened. The dark patch extended for more than 190,000 square miles, and started fading fast. Some spots that were dark on Oct. 14 were bright again by Oct. 29, and even more bright spots were visible on Jan. 15.

Turtle thinks the shadow is wet ground after rainfall, like a sidewalk darkened by a shower. Titan's winds aren't strong enough to cause such sudden or vast changes, she says, and it's doubtful that the kind of explosive volcanic activity that could explain the dark patch is possible on Titan.

It's not clear how much rain fell, she adds. Some areas could have flooded or sustained small puddles, but it may just be that the surface got wet.

The showers were probably prompted by Titan's changing seasons. Cassini has been orbiting Saturn since 2004, but since a full year on Saturn — and therefore all its moons — lasts 29 Earth years, the spacecraft has only observed one 7-year season on Titan. Astronomers saw storms and rain at Titan's south pole during the summer, and then the clouds cleared after the spring equinox in August 2009.

"It's kind of the equivalent on Titan right now of early April, just into northern spring," Turtle said. "What we think triggered this huge storm is that the weather patterns are seasonal." Major cloud patterns move north as the southern summer ends, similar to the way they do on Earth, she says. The only difference is, Earth's tropics sustain rain clouds year round. On Titan, the equator may see rain only a few times a year.

The difference comes, at least in part, from Titan's leisurely rotation rate, Tokano said. Titan takes 16 Earth days to rotate once, meaning its atmospheric circulation patterns are somewhat simpler. Titan's clouds shift quickly from north to south, filling the polar lakes with rain but mostly leaving the equator out to dry.

As for whether the spring showers are good news for the possibility of life on Titan, Turtle and Tokano are agnostic.

"There's no liquid water involved in any of the processes we're describing here, so life as we know it can't exist," Turtle said. "But there's clearly so much scope for prebiotic chemistry on Titan…. Understanding Titan better in general helps us to understand what the possibilities are."

Images: 1) NASA/JPL/Space Science Institute. 2) P. Huey/Science AAAS

"Rapid and Extensive Surface Changes Near Titan's Equator: Evidence of April Showers." E.P. Turtle, J.E. Perry, A.G. Hayes, R.D. Lorenz, J.W. Barnes, A.S. McEwen, R.A. West, A.D. Del Genio, J.M. Barbara, J.I. Lunine, E.L. Schaller, T.L. Ray, R.M.C. Lopes, E.R. Stofan. Science, Vol 331, March 18, 2011. DOI: 10.1126/science.1201063.


Japan Quake Epicenter Was in Unexpected Location

Posted: 17 Mar 2011 09:43 AM PDT

Japan has been expecting and preparing for the "big one" for more than 30 years. But the magnitude-9.0 temblor that struck March 11 — the world's fourth biggest quake since 1900 — wasn't the catastrophe the island nation had in mind. The epicenter of the quake was about 80 miles east of the city of Sendai, in a strip of ocean crust previously thought unlikely to be capable of unleashing such energy.

"This area has a long history of earthquakes, but [the Sendai earthquake] doesn't fit the pattern," says Harold Tobin, a marine geophysicist at the University of Wisconsin-Madison. "The expectation was high for a 7.5, but that's a hundred times smaller than a 9.0."

Understanding where big earthquakes will emerge is extraordinarily difficult, and nowhere more so than Japan. The northern part of the island nation sits at the intersection of four moving pieces of the Earth's crust. Where one tectonic plate slides beneath another, forming a subduction zone, sudden slippages can unleash tremendous amounts of energy.

The Sendai earthquake occurred at the Japan Trench, the junction of the westward-moving Pacific Plate and the plate beneath northern Japan. Historical records, one of seismologists' best tools for identifying areas at risk, suggest that this segmented fault has produced several earthquakes bigger than 7.0 in the 20th century alone — but none bigger than 8.0.

That's why the Japanese government has long focused on the nation's southern coast and the northward-moving Philippine Plate, which has a proven ability to generate large quakes. Quakes larger than 8.0 tend to strike the Tokai region in central Japan every 150 years or so, with the last big one appearing in 1854.

In 1976, researcher Katsuhiko Ishibashi of Kobe University warned that the Suruga Trough, a subduction zone just off the coast of Tokai, was due for a big one. In the years since, the Japanese government and research community have braced for this predicted Tokai earthquake — deploying GPS systems to monitor the movements of islands on the Philippine Plate and even generating computer simulations of how crowds in train stations might behave during such an event.

Current thinking about the mechanisms that govern megaquakes also favored the Philippine Plate as the site of greatest risk. About 80 percent of all earthquakes above magnitude 8.5 occur at the edges of such geologically young, warm tectonic plates. Kilometer-thick sediment layers carried by these plates are thought to grind smooth patches that allow long stretches of fault to rupture at once. The Pacific Plate, some of the oldest ocean crust on the planet, doesn't fit this description.


But preliminary computer simulations at Harvard that crunched early data from the Sendai quake suggest that a long stretch of the Japan Trench ruptured during the event — about 390 kilometers [240 miles]. Multiple segments that usually behave independently broke over the course of two to three minutes.

"It looks like three of the segments all slipped together," says Miaki Ishii, a seismologist at Harvard. "There is some evidence that a fourth may have been involved as well." She doesn't know why these particular segments ruptured together, or why other similar segments nearby didn't join them.

What does seem to be clear is that the slip happened in a relatively shallow region of the subduction zone. According to computer simulations run by geophysicist Chen Ji at the University of California, Santa Barbara, the quake originated 8 to 20 kilometers [5 to 12 miles] below the ocean floor. The shallower an earthquake, the more easily it flexes the Earth's crust, raising a mountain of water that can turn into a tsunami. The Sendai quake lifted the seafloor several meters and generated a tsunami up to 7 meters [23 feet] high.

"We're learning that we can't discount any of these big subduction zones," says Tobin. "They're all capable of producing large earthquakes." The magnitude-9.1 earthquake that struck Sumatra in 2004 also broke the rules: It, too, happened on the edge of an old piece of crust, hurling a tsunami across the Indian Ocean that was more deadly than any in recorded history.

In the United States, seismologists are now eyeing the Cascadia fault zone that flanks Oregon and Washington, which last gave way in 1700 to produce the largest known earthquake in North American history.

"Perhaps the earthquake in Japan shouldn't have been as surprising as it was," says Stanford seismologist Greg Beroza.

Beroza explains that deposits of sand found kilometers from shore have revealed a large tsunami that struck the Sendai area during the Jogan earthquake of 869. Ever since this magnitude-8.0+ quake, the Pacific Plate has been moving more than 8 centimeters [3 inches] per year — a tectonic sprint — pushing against its neighbor plate and perhaps building a tremendous amount of strain.
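As a rough sanity check on that strain argument, the plate convergence accumulated since 869 can be tallied directly. This is a back-of-envelope figure that assumes a steady rate throughout, and only a fraction of this motion is actually stored as elastic strain on the fault:

```python
# Accumulated Pacific Plate convergence since the 869 Jogan quake, assuming
# a steady 8 cm/yr rate the whole time (a deliberate oversimplification).
years = 2011 - 869
rate_m_per_yr = 0.08
total_m = years * rate_m_per_yr
print(f"{years} years x {rate_m_per_yr} m/yr = {total_m:.0f} m of convergence")
```

Even if only a modest share of those roughly 90 meters of convergence was locked at the fault, it would represent an enormous reservoir of strain.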

Seismologists hope that the detailed Sendai earthquake data collected by Japan's advanced monitoring technologies — hundreds of sensors spaced an average of 20 to 30 kilometers [12 to 18 miles] apart across the Japanese islands — will lead to a better understanding of subduction zone quakes. Researchers will also analyze the emerging pattern of aftershocks, which now includes at least three bigger than 7.0 and dozens bigger than 6.0.

But being able to spot signs far in advance of a big earthquake — currently far beyond the reach of modern science — may require digging deeper. Tobin and his Japanese colleagues have for the first time embedded strain sensors directly inside a subduction zone, the Nankai trough located southwest of Tokai.

Large earthquakes have struck this region every 100 to 120 years, from 686 to 1946. The researchers hope to catch the next big one in the act and find a warning sign that could provide more than a minute's notice that a monster quake is on its way.

Images: The March 11 Sendai earthquake (epicenter shown as star) occurred when the westward-moving Pacific Plate took a sudden dive beneath northern Japan's plate, the identity of which is disputed among scientists. (USGS)


Crop Tops: Strange Agricultural Landscapes Seen From Space

Posted: 17 Mar 2011 04:00 AM PDT


Agriculture is one of the oldest and most pervasive human impacts on the planet. Estimates of the land surface affected worldwide range up to 50 percent. But while driving through the seemingly endless monotony of wheat fields in Kansas may give you some insight into the magnitude of the change to the landscape, it doesn't compare to the view from above.

When seen from space, those same boring wheat fields are transformed into a strange and even beautiful pattern. Some of the most arresting agricultural landscapes occur in the Midwestern United States in areas that rely on center-pivot irrigation (shown at right). The area pictured above near Garden City, Kansas, is being farmed to the point of resembling abstract art or a Magic Eye illusion. Groundwater from the Ogallala Aquifer is used to grow corn, wheat and sorghum in the region.

The image above, taken by the USGS' Landsat 7 satellite on Sept. 25, 2000, is a false-color composite made using data from near infrared, red and green wavelengths and sharpened with a panchromatic sensor. The red areas actually represent the greenest vegetation. Bare soil or dead vegetation ranges from white to green or brown.
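The band-to-channel mapping described above can be sketched in a few lines. The tiny two-pixel "bands" here are made-up numbers purely for illustration:

```python
# False-color compositing as described: near-infrared, red, and green bands
# are mapped to the R, G, B display channels, so NIR-bright vegetation
# renders red. The two-pixel "bands" are synthetic values for illustration.
nir   = [0.8, 0.1]  # healthy crops reflect strongly in near-infrared
red   = [0.1, 0.3]
green = [0.2, 0.3]

pixels = [tuple(band[i] for band in (nir, red, green)) for i in range(len(nir))]
print(pixels)  # the first pixel's R channel dominates, so it displays as red
```

This is why "the red areas actually represent the greenest vegetation": the near-infrared brightness of healthy leaves lands in the display's red channel.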

The image below is a simulated true-color shot from the same county in Kansas taken June 24, 2001 by NASA's Terra satellite. Bright greens are healthy, leafy crops such as corn; sorghum would be less mature at this time of year and probably a bit paler; wheat is ready for harvest and appears a bright gold; brown fields have been recently harvested. The circles are perfectly round and measure a mile or a half mile in diameter.

In this gallery, we've collected some of the most interesting views of crops from space, including rice paddies in Thailand, cotton fields in Kazakhstan and alfalfa growing in the middle of the Libyan desert.

Images: 1) USGS/NASA. 2) USGS. 3) NASA.


Wednesday, 16 March 2011

Johnald's Fantastical Daily Link Splurge


Robot Nurses Are Less Weird When They Don’t Talk

Posted: 16 Mar 2011 11:05 AM PDT

Medical patients would probably be OK with semi-autonomous robots tending to them, but only if the robots don't talk to them first.

Robotics researchers tested whether a verbal explanation from a robot would help people feel more comfortable with the robot administering care, but found that precisely the opposite was true.

"Robotics has mostly been about teaching machines how to not touch people, walls, chairs and other objects," said robotics researcher Tiffany Chen of the Georgia Institute of Technology, part of a team that presented the study March 9 at a human-robot interaction conference in Switzerland. "This is one of the first steps toward understanding what happens when robots touch people."

Most semi-autonomous robots do precise or dangerous grunt work, such as assembling automobiles or helping neutralize improvised bombs. Now robots have advanced to the point that they are ready to take on more delicate work, such as assisting nurses. But the bots may not be as accepted in a hospital as they are in a factory.

"If we want robots to be successful at health care, we're going to need to think about how do we make those robots communicate their intention and how do people interpret the intentions of the robots," biomedical engineer Charlie Kemp of the Georgia Institute of Technology said in a video about the work.

Kemp and his team programmed a robot named Cody to gently wipe its hand across volunteers' arms, as if cleaning them, or administer a soothing touch. In some trials, Cody explained to people with a synthetic female voice what it was about to do, and in others it didn't say anything until after touching the participants.

People generally didn't mind being touched by Cody, but were less comfortable with the robot when it spoke to them beforehand. And participants were more accepting of a potentially necessary medical touch than of an attempt at a soothing touch by the robot.

"The results of the voice timing surprised us. We thought people would want to be told something like 'I'm going to clean you,' and then the robot cleans. But the opposite was true," Chen said.

Image: Cody the robot touches one of 56 study participants. (Georgia Tech)


Oldest Female Elephants Have Best Memory

Posted: 16 Mar 2011 10:00 AM PDT

Not to cause dinner table shouting or new excesses of political punditry — but in a test of a particular leadership skill among elephants, age and experience really did trump youth and beauty.

Elephant matriarchs 60 years of age or older tended to assess threats in a simulated crisis more accurately than younger matriarchs did, says Karen McComb of the University of Sussex in Brighton, England. When researchers played recordings of various lion roars, elephant groups with older matriarchs grew especially defensive at the sound of male cats. Younger matriarchs' families underreacted, McComb and her colleagues report in an upcoming Proceedings of the Royal Society of London B.

The older females have it right, McComb says. Male lions rarely attack an elephant, but when they do, they can be especially deadly: A single male can bring down an elephant calf.

Studying leadership among animals has become an active research area. "People have become intrigued by some of the parallels between the sorts of characteristics that seem to define a leader in animals and in humans," McComb says.


The new elephant approach "is definitely novel," says psychologist Mark van Vugt of VU University Amsterdam, who studies the evolution of leadership. The new paper extends a general observation — that older individuals show more leadership in tasks involving specialized knowledge — into situations involving threats.

"There is an interesting trade-off here, which certainly applies to humans and maybe elephants as well," van Vugt says. "The group might want a young, fit and aggressive leader to defend the group — the Schwarzenegger type — but at the same time might want an older, more experienced leader — the Merkel type — to make an accurate assessment of the dangers in the situation."

Among elephants, family groups made up of a matriarch and a dozen or so of her female kin and their youngsters can stay together for decades. The oldest elephant provides leadership, but "she doesn't lead by being heavy-handed," McComb says. She may not walk at the front of the group when they commute to their morning waterhole, but the other elephants pay attention to where she goes and how she reacts.

To test for crisis leadership among elephants, McComb and her colleagues played lion calls to 39 elephant families in Kenya's Amboseli National Park. Researchers compared reactions to roars from one lion versus three lions. All the matriarchs correctly perceived that three was more worrisome than one. "It was quite a revelation," says coauthor Graeme Shannon, also of Sussex. Before this test, evidence had been unclear about how widespread numerical threat assessment would be. The older matriarchs managed another layer of awareness, though, by judging male lions more threatening than females.

"If you remove these older individuals, you're going to have a much bigger impact than you realize because they're repositories of ecological knowledge and also of social knowledge," McComb says. Poachers, targeting the big old elephants, pose a particular menace to the species.


Image: Graeme Shannon

Video: Elephants react to what they perceive as a very dangerous lion during a test of threat assessment. (Karen McComb/Vimeo)


115-Year-Old Medical X-Ray Machine Comes Back to Life

Posted: 16 Mar 2011 09:02 AM PDT

A team of physicists, engineers and radiologists recently revived a first-generation X-ray device that had been collecting dust in a Dutch warehouse. The antique machine still sparked and glowed like a prop in an old science fiction movie, and used thousands of times more radiation than its modern counterparts to make an image.

The old machine was originally built in 1896 by two scientists in Maastricht, the Netherlands, just weeks after German physicist Wilhelm Conrad Röntgen reported his discovery of X-rays — an achievement that won him the first-ever Nobel Prize in physics and sparked a rash of copycat experiments.

H.J. Hoffmans, a physicist and high school director in Maastricht, and L. Th. van Kleef, director of a local hospital, assembled the system from equipment already on hand at Hoffmans' high school and used it to take some of the first photographs of human bones through the skin, including in van Kleef's 21-year-old daughter's hand.

Since then, X-rays, which are at the right wavelength to pass through muscle but are absorbed by denser bone, have become almost synonymous with medical imaging. But most of those first X-ray systems were lost to history. Because the techniques and technology to measure radiation doses weren't invented until decades after the first X-ray machines came about, no one knows exactly how powerful those systems were.

"There's a gap in knowledge with respect to these old machines," said medical physicist Gerrit Kemerink of the Maastricht University Medical Center. "By the time they could measure the properties, these machines were long gone."

About a year ago, when Kemerink's colleague at the hospital dug Hoffmans and van Kleef's aging machine out of storage to use in a local TV program on the history of health care in the region, Kemerink grew curious about what the gadget could do. In a paper published online in Radiology, Kemerink reports the first-ever diagnostics on a first generation X-ray device.

"I decided to try to do some measurements on this equipment, because nobody ever did," he said.


Aside from a modern car battery and some wires, the researchers used only the original equipment, including an iron cylinder wrapped in wire to transfer electrical energy from one circuit to another and a glass bulb with metal electrodes at each end.

The glass bulb, technically called a Crookes tube, contained a tiny bit of air, about a millionth of normal air pressure. When the researchers placed a high voltage over the tube, the electrons in the gas were ripped from their atoms and zipped across the tube from one electrode to the other.

Electrons naturally emit X-rays when they speed up, slow down or change direction. When the electrons hit the glass walls of the Crookes tube, they came to a screeching halt, giving off a ghostly green glow and invisible X-rays.

An 1896 Crookes tube emitting X-rays.

The machine took some coaxing before it would glow, Kemerink said. The team fiddled with it for a solid half hour with no success.

"At the time we were thinking that it would be possible that we would not succeed with our plans," he said. "But then suddenly something happened, and we were in business."

Kemerink now thinks that the gas pressure inside the bulb was too high for the electrons to travel through the tube. But then a bit of aluminum on one of the electrodes melted, sucking gases from inside the bulb.

"It's a technique used today to improve your vacuum: Evaporate metal and trap some gases," he said. "That is what happened, although we did not do it on purpose."

Images of a hand specimen from an 86-year-old woman taken with the old X-ray machine (left) and a modern one (right). The exposure for the 1896 system took 21 minutes.

The researchers used standard hospital radiation-detecting devices to measure the amount of X-rays needed to take an image of the bones in a human hand (this time, a specimen borrowed from the anatomy department, not from a living person). The old machine took surprisingly clear pictures, but gave the skin a dose of radiation 1,500 times greater than the same image would require today. An exposure that takes 21 milliseconds (thousandths of a second) on a modern machine took up to 90 minutes on the antique system.
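Taking the article's figures at face value, the two ratios can be compared directly. The inputs (21 milliseconds, 90 minutes, a 1,500-fold dose) all come from the text, and the derived dose-rate figure is only as good as they are:

```python
# Comparing the article's exposure-time and skin-dose figures.
modern_s  = 0.021    # 21 milliseconds on a modern machine
antique_s = 90 * 60  # up to 90 minutes on the 1896 system
dose_ratio = 1500    # antique skin dose relative to modern

time_ratio = antique_s / modern_s
print(f"exposure ~{time_ratio:,.0f}x longer, dose ~{dose_ratio}x higher,")
print(f"so the antique dose *rate* was roughly {time_ratio / dose_ratio:,.0f}x lower")
```

In other words, the 1896 tube delivered radiation far more feebly than a modern tube; the enormous dose came from letting that feeble beam run for the better part of two hours.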

"It was interesting that the image quality was actually that good," said radiologist Tom Beck of Quantum Medical Metrics, a company that researches ways to get structural information from bones using medical imaging. "That was surprising."

This first-generation system did not produce enough radiation to cause health problems, although Kemerink and colleagues all stood behind a transparent lead shield whenever the machine was on, just in case. But X-ray devices got steadily more powerful shortly after Hoffmans and van Kleef built their machine, and technicians didn't always take precautions against harmful radiation.

"Within weeks, people reported skin burns, a little bit later even much worse things," like blisters and sores that wouldn't heal, Kemerink said. Some workers had to have fingers or even a whole arm amputated. "Many of these early X-ray workers developed cancer, and many of them died untimely, very young."

The difference in danger highlights how far X-rays have come, he said. In another study published online Feb. 15 in Insights into Imaging, Kemerink and colleagues showed that, with all the shielding used today, modern X-ray workers feel less radiation in the hospital than they do at home.

"There's so much to say about how far we've come," Kemerink said. "These machines when they started they were extremely dangerous. Now in all those years, they improved technology so far that you can really neglect what you are receiving when you do normal X-ray scans."

Working with the machine was "very special, I must say," Kemerink added. The air smelled of ozone, the interruptor buzzed, lightning crackled in the spark gap, and the insides of the human body showed themselves.

"Our experience with this machine," the researchers wrote, "was, even today, little less than magical."

Video: Maastricht University Medical Center. Images: Courtesy Gerrit Kemerink.

"Characteristics of a First-Generation X-Ray System." Martijn Kemerink, Tom J. Dierichs, Julien Dierichs, Hubert J.M. Huynen, Joachim E. Wildberger, Jos M.A. van Engelshoven, Gerrit J. Kemerink. Radiology, online March 16, 2011. DOI: 10.1148/radiol.11101899.
"Less radiation in a radiology department than at home." Gerrit J. Kemerink, Marij J. Frantzen, Peter de Jong and Joachim E. Wildberger. Insights into Imaging, online Feb. 15, 2011. DOI: 10.1007/s13244-011-0074-7.


Japan Struggles to Control Quake-Damaged Nuke Plant

Posted: 15 Mar 2011 10:08 AM PDT

In the aftermath of the earthquake and tsunami that struck northeastern Japan on March 11, engineers are flooding three nuclear reactors with seawater in an effort to cool their radioactive cores and to prevent all their nuclear fuel from melting down. Explosions have been recorded at two of the reactors, but do not seem to have breached the crucial inner containment vessels.

The grimmest situation is at the third of these reactors, where water stopped flowing temporarily March 14, exposing the fuel rather than cooling it. Much now depends on the containment vessels that shield the highly radioactive reactor cores. Even a full meltdown does not necessarily mean that the reactors will release large amounts of radioactive material — as long as the vessels remain intact.

Officials are closely monitoring several reactors at the Fukushima facility, on the northeastern coast of Japan near where the magnitude-8.9 earthquake hit. There are two clusters of reactors at Fukushima. The Daiichi cluster includes six boiling-water reactors, all of which came online in the 1970s.

In the boiling-water design, nuclear reactions in the core generate heat and cause water to boil, which makes steam to drive turbines and produce electricity. Together, the six Daiichi reactors produced 4.7 gigawatts of power before the accident.

The largest nuclear facility in the United States, the Palo Verde facility in Arizona, has a capacity of 3.7 gigawatts and serves roughly 4 million people. With 54 nuclear reactors operating before the accident, Japan is the third-largest producer of nuclear energy after France and the United States.

Most nuclear reactors use uranium as their primary fuel, although Unit 3 at Daiichi uses a mix that includes plutonium. Pellets of enriched fuel are encased inside long, narrow tubes made of an alloy containing the metal zirconium. These tubes, known as fuel rods, are spaced in an array with water flowing between them. Several hundred of these packages are then put together to create the core of the nuclear reactor.

The uranium-235 isotope, which contains 92 protons and 143 neutrons, is inherently unstable, tending to split (or fission) into lighter elements. Such spontaneous fission releases stray neutrons. When one of those neutrons hits a uranium atom, it also initiates fission into lighter elements, releasing more neutrons. Those neutrons can then go on to hit other uranium atoms in the fuel pellets, causing a chain reaction.

A reactor is said to have "gone critical" when it has this self-sustaining reaction underway in its core. As long as operators keep variables such as temperature and the flux of neutrons in hand, the fission will continue at a controlled pace.
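The "controlled pace" can be illustrated with a toy multiplication-factor model: a standard textbook idealization, not a reactor simulation. With effective multiplication factor k, each generation of neutrons produces k times as many as the last:

```python
# Toy neutron-generation model: with effective multiplication factor k, the
# neutron population after n generations scales as k**n. k > 1 runs away
# (supercritical), k = 1 is a steady chain reaction, k < 1 dies out.
def population(k, generations, n0=1000):
    return n0 * k ** generations

for k in (0.98, 1.00, 1.02):
    print(f"k = {k}: {population(k, 100):.0f} neutrons after 100 generations")
```

Even a 2 percent deviation from k = 1 compounds into a several-fold change within 100 generations, which is why operators work to hold the reaction exactly at criticality.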

But the reactor core requires water to cool things down and moderate the flux of neutrons coming from the fissioning uranium. Without water things can heat up quickly — both the temperature and the rate of fission within the reactor core.


According to Japan's Nuclear and Industrial Safety Agency, the earthquake knocked out power to the Daiichi facility. "Control rods" to slow the rate of fissioning dropped automatically in between the fuel rods.

Control rods are usually attached to magnets and hang above the core, and if an earthquake strikes they automatically detach, drop down and help shutter the reaction, says Ron Hart, a retired professor of nuclear engineering from Texas A&M University in College Station. The control rods absorb neutrons to prevent the reaction with uranium that causes fission. But even with the control rods in place, the reactor still produces heat at a small fraction of its full power, because of the decay products of the uranium fission.
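That residual "small fraction" of full power is decay heat, which a textbook rule of thumb, the Wigner-Way approximation, estimates as a function of time since shutdown. A rough sketch, with an assumed one year of prior operation (these numbers are illustrative, not plant-specific):

```python
# Wigner-Way rule of thumb for decay heat as a fraction of full power,
# t seconds after shutdown, for a reactor run for T seconds beforehand.
def decay_heat_fraction(t, T=3.0e7):  # T ~ one year of prior operation
    return 0.066 * (t ** -0.2 - (t + T) ** -0.2)

for t in (10, 3600, 86400):  # 10 seconds, 1 hour, 1 day after shutdown
    print(f"t = {t:>6} s: {100 * decay_heat_fraction(t):.1f}% of full power")
```

A few percent of a multi-gigawatt reactor is still tens of megawatts of heat, which is why cooling must continue for days after the control rods drop.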

As planned, backup diesel generators kicked in after the monster earthquake and continued to pump water in to cool the reactor cores. But when a tsunami swept across the Japanese coast about an hour later, the wave disabled the backup generators. The next backup system then kicked in: battery-powered pumps.

But the battery pumps could not keep up with the residual heat still coming from the cores of several Daiichi reactors. Excess heat caused steam to build up in the system, which operators eventually vented into the environment along with low levels of radioactive elements like cesium and iodine.

At the same time, though, hydrogen gas had apparently built up within the core, likely created by chemical reactions of the hot zirconium rods with water. The explosions at Daiichi Units 1 and 3 were likely caused by that hydrogen igniting.

Potentially far more serious is Unit 2, where pumps failed for a time on March 14, letting the water level drop until the fuel rods were almost completely exposed. If the rods melt entirely, they could drop their fuel pellets to the bottom of the reactor core. The pellets could then generate enough heat to melt through the bottom of the steel containment vessel.

"Once that happens the ability to contain the accident is greatly reduced, because the core is liquefied and spreads across the floor," says Edwin Lyman, a physicist with the Union of Concerned Scientists in Washington, D.C., a group that has long voiced concerns about the risks of nuclear power.

In the 1986 nuclear accident at Chernobyl in Ukraine, the melting core did not have the heavy shielding of a containment vessel, as the reactors in Japan do. The Chernobyl core exploded, blowing radioactive materials over large parts of western Asia and Europe and causing an ecological and public health catastrophe.

In the 1979 Three Mile Island accident in Pennsylvania, the reactor's core suffered a partial meltdown but its pressure vessel was not breached, and only low levels of radioactive material made it into the environment. The Daiichi incidents, at least so far, may be far more like Three Mile Island than like Chernobyl.

On the international scale used by experts to rank nuclear incidents, Chernobyl ranked as a "major accident" or 7, the highest on the scale. Three Mile Island was a 5, an "accident with wider consequences." Japanese officials have said they regard the Fukushima incident as a 4, an "accident with local consequences."

Operators at Daiichi have flooded all three reactors with seawater mixed with boric acid. The boron in the boric acid absorbs neutrons and helps keep them from bouncing around and triggering further fission in the fuel rods. Salts in the seawater will, however, permanently corrode the reactor cores and render them unusable in the future.

Hart says it will probably take several weeks of keeping the cores underwater to cool them enough to stop the fission completely. At that point, operators can carefully extract the cores and take them to a containment facility to assess damage, take them apart and dispose of them.

Image: DigitalGlobe [high-resolution version]


NASA Considers Shooting Space Junk With Lasers

Posted: 15 Mar 2011 09:32 AM PDT

The growing cloud of space junk surrounding the Earth is a hazard to spaceflight, and will only get worse as large pieces of debris collide and fragment. NASA space scientists have hit on a new way to manage the mess: Use mid-powered lasers to nudge space junk off collision courses.

The U.S. military currently tracks about 20,000 pieces of junk in low-Earth orbit, most of which are discarded bits of spacecraft or debris from collisions in orbit.

The atmosphere naturally drags a portion of this refuse down to Earth every year. But in 1978, NASA astronomer Don Kessler predicted a doomsday scenario: As collisions drive up the debris, we'll hit a point where the amount of trash is growing faster than it can fall out of the sky. The Earth will end up with a permanent junk belt that could make space too dangerous to fly in, a situation now called "Kessler syndrome."

Low-Earth orbit has already seen some scary smashes and near-misses, including the collision of two communications satellites in 2009. Fragments from that collision nearly hit the International Space Station a few months later. Some models found that the runaway Kessler syndrome is probably already underway at certain orbit elevations.

"There's not a lot of argument that this is going to screw us if we don't do something," said NASA engineer Creon Levit. "Right now it's at the tipping point … and it just keeps getting worse."

In a paper submitted to Advances in Space Research and posted to the preprint server, a team led by NASA space scientist James Mason suggests a novel way to cope: Instead of dragging space junk down to Earth, just make sure the collisions stop.


"If you stop that cascade, the beauty of that is that natural atmospheric drag can take its natural course and start taking things down," said William Marshall, a space scientist at NASA and coauthor of the new study. "It gives the environment an opportunity to clean itself up."

Simply keeping new fragments from forming can make a big difference for orbital safety, Levit said. Because objects with more surface area relative to their mass feel more drag, the atmosphere pulls down the lightest, flattest fragments of space junk first. When big pieces of debris break up into smaller ones, the pieces become harder and harder to remove.

Worse, the pieces left behind are often the most dangerous: small, dense things like bolts.

"If one collides with a satellite or another piece of debris at the not-unreasonable relative velocity of, say 5 miles per second, it will blow it to smithereens," Levit said.

In the new study, the researchers suggest focusing a mid-powered laser through a telescope to shine on pieces of orbital debris that look like they're on a collision course. Each photon of laser light carries a tiny amount of momentum. Together, all the photons in the beam can nudge an object in space and slow it down by about 0.04 inches per second.

Shining the laser on bits of space litter for an hour or two a day should be enough to move the whole object by about 650 feet per day, the researchers show. That might not be enough to pull the object out of orbit altogether, but preliminary simulations suggest it could be enough to avoid more than half of all debris collisions.
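The photon-momentum argument can be checked with a back-of-envelope calculation. The laser power, the fraction of the beam that actually intercepts the target, and the debris mass below are all assumptions chosen for illustration, not figures from the paper:

```python
# Photon-pressure nudge with assumed parameters: a 5 kW laser of which ~1%
# intercepts the target after beam spread over orbital distances, acting on
# a 1 kg fragment for two hours per day. Radiation pressure force is F = P/c.
C = 3.0e8              # speed of light, m/s
laser_power_w = 5.0e3  # assumed laser power
intercept_frac = 0.01  # assumed fraction of the beam hitting the target
mass_kg = 1.0          # assumed debris mass
dwell_s = 2 * 3600     # 2 hours of illumination per day

force_n = intercept_frac * laser_power_w / C
delta_v = force_n * dwell_s / mass_kg  # m/s per day
print(f"force: {force_n:.2e} N, daily delta-v: {delta_v * 1e3:.2f} mm/s "
      f"({delta_v / 0.0254:.3f} in/s)")
```

With these assumed numbers the daily velocity change comes out near the article's figure of about 0.04 inches per second, but that agreement depends entirely on the guessed intercept fraction and mass.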

NASA scientists have suggested shooting space junk with lasers before. But earlier plans relied on military-class lasers that would either destroy an object altogether, or vaporize part of its surface and create little plasma plumes that would rocket the piece of litter away. Those lasers would be prohibitively expensive, the team says, not to mention make other space-faring nations nervous about what exactly that military-grade laser is pointing at.

The laser to be used in the new system is the kind used for welding and cutting in car factories and other industrial processes. Such lasers are commercially available for about $800,000. The rest of the system could cost between a few million and a few tens of millions of dollars, depending on whether the researchers build it from scratch or modify an existing telescope, perhaps one at the Air Force Maui Optical Station in Hawaii or at Mt. Stromlo in Australia.

"This system solves technological problems, makes them cheaper, and makes it less of a threat that these will be used for nefarious things," said space security expert Brian Weeden, a technical adviser for the Secure World Foundation who was not involved in the new study. "It's certainly very interesting."

However, "I don't think this is a long-term solution," Weeden said. "It might be useful to buy some time. But I don't think it would replace the need to remove debris, or stop creating new junk."

Don Kessler, from whom the Kessler syndrome takes its name, agrees, and points out that laser light isn't forceful enough to divert the biggest pieces of junk.

"The only complete solution is to prevent collisions involving the most massive objects in Earth orbit," he said.

Image: ESA

"Orbital Debris-Debris Collision Avoidance." James Mason, Jan Stupl, William Marshall and Creon Levit. Submitted to Advances in Space Research.


Midway’s Albatrosses Survive the Tsunami

Posted: 15 Mar 2011 08:58 AM PDT

The famed albatrosses of Midway Atoll took a beating from the tsunami, but their population will survive, say biologists on the islands.

There are, of course, more pressing concerns in the tsunami's aftermath than wildlife, and some might balk at paying attention to birds right now. But compassion isn't a zero-sum game, and Midway Atoll is one of Earth's natural treasures: 2.4 square miles of coral ringing a deep-sea mountaintop halfway between Honolulu and Tokyo, a flyspeck of dry land that's home to several million seabirds.

Roughly two-thirds of all Laysan albatrosses live on Midway's two islands, as do one-third of all black-footed albatrosses, and about 60 people. Many of them work at the Midway Atoll National Wildlife Refuge. They had time to prepare for the tsunami, which struck late on the night of March 10. Nobody was hurt; after the waves receded, they checked on the wildlife.

An estimated 1,000 Laysan adults were killed, and tens of thousands of chicks, said Refuge official Barry Stieglitz. Those figures represent just the first wave of mortality, as adults who were at sea when the tsunami hit may be unable to find their young on returning. Chicks now wandering on shore may be doomed — but in the long run, the population as a whole will recover.


"The loss of all these chicks is horrible. It's going to represent a significant portion of this year's Laysan albatross hatch. But in terms of overall population health, the most important animals are the proven, breeding adults," said Stieglitz. "In the long term, the greatest impact would be if we lost more adults. The population should come through this just fine."

On a sadder note, however, one of the wandering chicks is the first short-tailed albatross to hatch on Midway in decades. The species was hunted to near-extinction in the 19th century, its feathers so fashionable that a population of millions was reduced to a handful of juveniles who stayed at sea during the carnage. (Young short-tailed albatrosses live in the open ocean for several years before mating.) About 3,000 of the species now survive, and a few have recently made a home on Midway.

"If the chick lost one parent, it could be in danger. If it lost both, it's definitely out of luck," Stieglitz said.

Another well-known avian denizen of Midway is Wisdom, a 60-year-old female Laysan albatross. Banded for identification in 1956, Wisdom is the oldest known wild bird. In February, she was spotted rearing a new chick.

"When I gaze at Wisdom, I feel as though I've entered a time machine," wrote U.S. Fish and Wildlife Service biologist John Klavitter in an email. "My mind races to the past and all the history she has observed through time."

Midway's Laysan albatrosses feed in waters off Alaska, flying about 50,000 miles each year as adults. Wisdom has flown between 2 and 3 million miles in her lifetime, compensating for age with smarts and efficiency. She hasn't been spotted since the tsunami, but Stieglitz said the biologists haven't looked for her yet. Wisdom's nest is on high ground. They're not too worried about her.


Images: 1) Short-tailed albatross chick./Pete Leary, USFWS. 2) Wisdom, the 60-year-old Laysan albatross./John Klavitter, USFWS.