- Quantum Computer Simulates Hydrogen Molecule Just Right
- Evolution Shrank Some Primates’ Brains
- New Animations Take You Flying Over Mars
- NASA Sends Airborne Radar to Map Haiti Faults in 3-D
Posted: 28 Jan 2010 12:45 AM PST
Almost three decades ago, Richard Feynman — known popularly as much for his bongo drumming and pranks as for his brilliant insights into physics — told an electrified audience at MIT how to build a computer so powerful that its simulations "will do exactly the same as nature."
Not approximately, as digital computers tend to do when facing complex physical problems that must be addressed via mathematical shortcuts — such as forecasting orbits of many moons whose gravities constantly readjust their trajectories. Computer models of climate and other processes come close to nature but hardly imitate it. Feynman meant exactly, as in down to the last jot.
Now, finally, groups at Harvard and the University of Queensland in Brisbane, Australia, have designed and built a computer that hews closely to these specs. It is a quantum computer, as Feynman forecast. And it is the first quantum computer to simulate and calculate the behavior of a molecular, quantum system.
Much has been written about how such computers would be paragons of calculating power should anybody learn to build one that is much more than a toy. And this latest one is at the toy stage, too. But it is just the thing for solving some of the most vexing problems in science, the ones that Feynman had in mind when he said "nature" — those problems involving quantum mechanics itself, the system of physical laws governing the atomic scale. Inherent to quantum mechanics are seeming paradoxes that blur the distinctions between particles and waves, portray all events as matters of probability rather than deterministic destiny, and under which a given particle can exist in a state of ambiguity that makes it potentially two or more things, or in two or more places, at once.
As the teams report online January 10 in Nature Chemistry, the Harvard group, led by chemist Alán Aspuru-Guzik, developed the conceptual algorithm and schematic that defined the computer's architecture. Aspuru-Guzik had been working on such things for years but didn't have the hardware to test his ideas. At the University of Queensland, physicist Andrew G. White and his team, veterans of building such sophisticated gadgets, said they thought they could make one to the Harvard specs and, after some collaboration, did so. In principle the computer could have been rather small, "about the size of a fingernail," White says. But his group spread its components across a square meter of lab space to make the machine easier to adjust and program.
Within its filters and polarizers and beam splitters, just two photons at a time traveled simultaneously, their particle-like yet wavelike natures playing peek-a-boo in clouds of probability just as quantum mechanics says they should.
Quantum computing's power stems from the curiosity that a qubit — a bit of quantum information — is not limited to holding a single discrete binary number, 1 or 0, as is the bit of standard computing. Qubits exist in a limbo of uncertainty, simultaneously 1 and 0. Until the computation is done and a detector measures the value, that very ambiguity allows greater speed and flexibility as a quantum computer searches multiple permutations at once for a final result.
Plus, not only do the photons have this mix of quantum identities, a state formally called superposition, they are also entangled. Entanglement is another feature of quantum mechanics in which the properties of two or more superposed particles are correlated with one another. It is the superposition of superpositions, in which the state of one is connected to the state of the other despite the particles' separation in distance. Entanglement further increases the ability of a quantum computer to explore simultaneously all possible solutions to a complex problem.
But with just two photons as its qubits, the new quantum computer could not tackle quantum behavior involving more than two objects. So, the researchers asked it to calculate the energy levels of the hydrogen molecule, the simplest one known. Other methods have long revealed the answer, providing a check on the accuracy of doing it with qubits. Corresponding to the two wavelike photons rattling fuzzily along in the computer, the hydrogen molecule has two wavelike electrons chemically binding its two nuclei — each a single proton.
Led by the paper's first author, Benjamin Lanyon, who is now at the University of Innsbruck in Austria, the Queensland team programmed the equations that govern how electrons behave near protons into the machine by tweaking the arrangement of filters, wavelength shifters and other optical components. Each such piece of optical hardware corresponded to one of the logic gates that add, subtract, integrate and otherwise manipulate binary data in a standard computer. The researchers then entered initial "data" corresponding to the distance between the molecule's nuclei — a driver of what energies the electrons might be able to take on when the molecule is excited by an outside influence.
The photons are each given a precise angle of polarization — the orientation of the electric and magnetic components of their fields — and for one of the photons the angle was chosen to correspond to that datum. On the first run of a calculation, the second photon then shared this datum via its entanglement with the first and, traveling at the speed of light, emerged from the machine with the first digit of the answer. In an iterative process, that digit was then used as data for another run, producing the second digit — a process repeated for 20 rounds.
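That digit-at-a-time readout can be sketched classically. In this hypothetical toy (ordinary arithmetic, not the actual quantum circuit), the molecular energy is encoded as a phase between 0 and 1, and each pass of the loop plays the role of one run of the machine: it "measures" one binary digit and feeds the remainder back in as the data for the next run.

```python
def phase_digits(phi, rounds=20):
    """Peel off binary digits of a phase in [0, 1), one per 'run'."""
    digits = []
    for _ in range(rounds):
        phi *= 2            # expose the next binary digit
        bit = int(phi)      # the digit this run 'measures'
        digits.append(bit)
        phi -= bit          # the remainder seeds the next run
    return digits

def digits_to_phase(digits):
    """Reassemble the measured digits into an estimate of the phase."""
    return sum(bit / 2 ** (i + 1) for i, bit in enumerate(digits))

# After 20 rounds the estimate is good to about one part in 2**20.
estimate = digits_to_phase(phase_digits(0.3))
```

On the real machine each "run" is a fresh pair of entangled photons passing through the optics; here the feedback loop is just deterministic arithmetic, which is exactly the shortcut a quantum computer avoids needing.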
By following — some would say simulating — the same weird physics as do the electrons of atomic bonds themselves, the computer's photons got the permitted energy correct to within six parts per million.
"Every time you add an electron or other object to a quantum problem, the complexity of the problem doubles," says James Whitfield, a graduate student at Harvard and second author on the paper. "The great thing," he added, "is that every time you add a qubit to the computer, its power doubles too." In formal language, the power of a quantum computer scales exponentially with its size (as in number of qubits) in exact step with the size of quantum problems. In fact, says his professor, Aspuru-Guzik, a computer of "only" 150 qubits or so would have more computing power than all the supercomputers in the world today, combined.
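Whitfield's doubling rule is easy to see in numbers. A back-of-the-envelope sketch (an illustration, not anything from the paper): a classical simulation must store one complex amplitude per basis state, and the number of basis states doubles with each added qubit.

```python
def amplitudes_needed(n_qubits):
    """Number of complex amplitudes a classical simulator must track."""
    return 2 ** n_qubits

# Each added qubit doubles the classical bookkeeping...
assert amplitudes_needed(3) == 2 * amplitudes_needed(2)

# ...so 150 qubits already demand more amplitudes than any conceivable
# classical memory could hold.
print(f"{amplitudes_needed(150):.2e}")
```

A quantum computer, by contrast, carries those amplitudes in its own physical state, which is why its power keeps pace with the problem.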
Whitfield is nearing completion of his studies to become a theoretical chemist. One goal is, eventually, to be able to calculate the energy levels and reaction rates of complex molecules with scores or even hundreds of electrons binding them together. Even for problems with just four or five electrons, the cost of computation by standard means grows so fast — exponentially — that standard computers cannot handle it.
The work is "great, a proof of principle, more evidence that this stuff is not pie in the sky and can be built," says a University of California, Berkeley chemistry professor, Birgitta Whaley. "It is the first time that a quantum computer has been used to calculate a molecular energy level." And while most of the publicity for quantum computers has marveled at the potential power to break immense numbers into their factors — a key to breaking secret codes and thus a possibility with national security implications — "this has major implications for practical uses with very broad application," Whaley says. These uses might include the ability, without trial and error, to design complex chemical systems and advanced materials with properties never before seen.
Scaling up to five, 10 or hundreds of qubits will not be easy. In the long run, photons are unlikely to serve as qubits because of the difficulty of entangling and monitoring so many of them. Electrons, simulated atoms called quantum dots, ionized atoms or other such particles may eventually form the blurry hearts of quantum computers. How long from now? "I'd say less than 50 years, but more than 10," says White.
In a striking bit of symmetry to go with using a quantum computer to solve a quantum problem, the latest work resonates with Feynman's original idea in another way. At that talk at MIT — published in 1982 in the International Journal of Theoretical Physics — Feynman not only suggested the basis for such a computer, he also drew a little picture of one. It included two little blocks of the semi-transparent mineral calcite to control and measure the photons' polarizations. Looking at the diagram of the device built recently by the Queensland team reveals, sure enough, two "calcite beam displacers." Whatever shade of Richard Feynman flickers still in the entanglements of the universe, and were it made to collapse into something corporeal, perhaps it would be smiling.
Image: Benjamin Lanyon
Posted: 27 Jan 2010 04:00 PM PST
Primate brains have not always gotten bigger as they evolved, according to new research. The findings challenge the controversial argument that Homo floresiensis, also known as the hobbit, had a tiny, chimp-sized brain because of disease.
"It was assumed that brain sizes generally get bigger through primate evolution," said Nick Mundy, a Cambridge University evolutionary geneticist and lead author of the study. While that may be true for most primates, "we find very strong evidence in several lineages that brain sizes actually have gotten smaller."
The brains of marmosets, mouse lemurs and mangabeys have shrunk significantly. The brain of the mouse lemur, a teacup-sized, nocturnal primate found in Madagascar, is 27 percent smaller than that of the common ancestor of all lemurs, Mundy said.
The paper, which appears Jan. 27 in BMC Biology, analyzed brain size and body mass from 37 living and 23 extinct primate species and used three different models to reconstruct how the brain evolved.
Though it's not clear why smaller brains would be advantageous to some species, the brain's voracious energy consumption may have played a role, Mundy speculated. If food was scarce, it may have been better to sacrifice intelligence to use less energy.
The findings are more fodder for the debate about the mysterious H. floresiensis, a 3-foot-tall hominid discovered in a cave on the Indonesian island of Flores in 2003. Some have argued these "hobbits" were a distinct species, while others say they were simply stunted, sickly Homo sapiens.
In the second line of reasoning, the hominids may have suffered from cretinism, a congenital thyroid disorder that leads to stunted growth and small brains. Part of this camp's argument was that the hobbits' minuscule brain was too small to make evolutionary sense, Mundy said.
"We've just applied the reduction in brain size that we see across the rest of the primate phylogeny to the case of the Flores man," he said. "Under reasonable assumptions, it does look plausible that this massive brain-size reduction could have occurred."
Some scientists argue that there's no need to rely on either evolutionary brain shrinkage or pathology to account for the short stature of the hobbits.
"Arguments for H. flo being somehow pathological (one syndrome or another) have been totally refuted," Peter Brown wrote in an e-mail. Brown, a paleoanthropologist at the University of New England in New South Wales, Australia, first discovered the hobbit skeletons.
What's more, evidence suggests the diminutive island dwellers left Africa around 1.8 million years ago, and "probably arrived on Flores already small-brained and small-bodied," he wrote. In addition, their skeletal and dental features most resemble the tiny-brained Australopithecus or Homo habilis. So, the brain of H. floresiensis could have started out small and stayed that way, rather than shrinking through evolution.
Images: 1) Pygmy marmoset. jwm_angrymonkey/flickr
Citation: "Reconstructing the ups and downs of primate brain evolution: Implications for adaptive hypotheses and Homo floresiensis," Stephen H. Montgomery, Isabella Capellini, Robert A. Barton, Nicholas I. Mundy, BMC Biology, 27 January 2010.
Posted: 27 Jan 2010 10:49 AM PST
A space-loving animator has created stunning flyovers of Mars from data captured by NASA's HiRISE imager, which is mounted on the Mars Reconnaissance Orbiter satellite.
HiRISE creates detailed digital-elevation models. Crunch that data, add perspective and some cinematic effects, and you have the movies that Doug Ellison, founder of UnmannedSpaceflight.com, posted to YouTube this morning.
The video at the top shows the Mojave Crater. The one below takes you flying through Athabasca Valles. Ellison said that both animations are rendered accurately from the data with no exaggerated scaling.
Via Nancy Atkinson at Universe Today
Videos: Doug Ellison.
Posted: 27 Jan 2010 10:31 AM PST
NASA is sending a radar-equipped jet to Haiti to make 3-D maps of the deformation caused by the magnitude 7.0 earthquake on Jan. 12 and the multiple aftershocks that continue to occur.
The Uninhabited Aerial Vehicle Synthetic Aperture Radar, or UAVSAR, was already scheduled to head to South America aboard a modified Gulfstream III to study volcanoes, forests and Mayan ruins. NASA added the island of Hispaniola to the itinerary to help study faults in both Haiti and the Dominican Republic.
"UAVSAR will allow us to image deformations of Earth's surface and other changes associated with post-Haiti earthquake geologic processes, such as aftershocks, earthquakes that might be triggered by the main earthquake farther down the fault line, and the potential for landslides," JPL's Paul Lundgren, the principal investigator for the Hispaniola overflights, said in a press release Wednesday.
"Because of Hispaniola's complex tectonic setting, there is an interest in determining if the earthquake in Haiti might trigger other earthquakes at some unknown point in the future," Lundgren said, "either along adjacent sections of the Enriquillo-Plantain Garden fault that was responsible for the main earthquake, or on other faults in northern Hispaniola, such as the Septentrional fault."
The UAVSAR, which left NASA's Dryden Flight Research Center in Edwards, Calif., on Jan. 25, will fly over Hispaniola multiple times this week and again in early February.
Since November 2009, the radar has been mapping the San Andreas and other major faults in California. The 3-D data will help scientists better understand the state's seismic risk.
UAVSAR works by sending microwaves to the ground from a pod slung under the aircraft, which flies at about 41,000 feet, and recording the return signal. The differences in the times it takes the waves to return to the plane from different points on the ground give information about the topography. By hitting the same target from different angles as the plane flies overhead, a 3-D image can be built up. Flying over the same area again later lets very precise details about ground motion be calculated, giving scientists information about strain buildup on a fault.
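The timing principle behind the topographic measurement can be sketched in a few lines. This is an idealized illustration of radar ranging, ignoring atmospheric effects and the interferometric processing an instrument like UAVSAR actually performs:

```python
C = 299_792_458.0  # speed of light in meters per second

def slant_range_m(round_trip_s):
    """Distance to a ground point from a pulse's round-trip travel time.

    The pulse covers the distance twice (out and back), hence the
    division by two.
    """
    return C * round_trip_s / 2.0

# A point straight below a plane at 41,000 feet (~12,500 m) returns the
# pulse after roughly 83 microseconds.
t = 2.0 * 12_500.0 / C
print(slant_range_m(t))
```

Comparing the ranges measured to the same spot on two passes weeks apart is what reveals the tiny ground displacements that indicate strain on a fault.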
The Hispaniola data will be made public in a few weeks. The Dominican Republic flyovers could help scientists understand future earthquakes on the Septentrional fault.
Images: 1)NASA. 2) Dave Bullock/Wired.com