
Astronomers risk misinterpreting planetary signals in James Webb data

NASA’s James Webb Space Telescope reveals the universe with spectacular and unprecedented clarity. The observatory’s razor-sharp infrared vision cuts through cosmic dust to illuminate some of the universe’s earliest structures, along with previously obscured stellar nurseries and spinning galaxies hundreds of millions of light-years away.

In addition to seeing farther into the universe than ever before, Webb will capture the most comprehensive view of objects in our own galaxy, including some of the 5,000 planets that have been discovered in the Milky Way. Astronomers are harnessing the telescope’s light-analyzing precision to decode the atmospheres of some of these nearby worlds. The properties of their atmospheres could offer clues to how a planet formed and whether it harbors signs of life.

But a new study from MIT suggests that the tools astronomers typically use to decode light-based signals may not be good enough to accurately interpret data from the new telescope. Specifically, opacity models — tools that model how light interacts with matter based on material properties — may need significant readjustment in order to match the accuracy of Webb’s data, the researchers say.

If these models are not refined, the researchers predict that derived properties of planetary atmospheres, such as temperature, pressure, and elemental composition, could be off by an order of magnitude.

“There is a scientifically significant difference between a compound like water being present at 5% versus 25%, which current models cannot differentiate,” says study co-leader Julien de Wit, an assistant professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

“Currently, the model we use to decipher spectral information is not up to par with the precision and quality of data we have from the James Webb Telescope,” adds EAPS graduate student Prajwal Niraula. “We need to up our game and tackle the opacity problem together.”

De Wit, Niraula, and their colleagues published their study in Nature Astronomy. Co-authors include spectroscopy experts Iouli Gordon, Robert Hargreaves, Clara Sousa-Silva, and Roman Kochanov of the Harvard-Smithsonian Center for Astrophysics.

Upgrade

Opacity is a measure of how easily photons pass through a material. Photons of certain wavelengths can pass straight through a material, be absorbed, or be reflected back, depending on whether and how they interact with certain molecules within the material. This interaction also depends on the material’s temperature and pressure.
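As a rough illustration of the physics involved (a toy sketch, not the study’s actual model), the Beer-Lambert law relates how much light survives a trip through an absorbing gas to the gas’s cross section, density, and path length; all numbers below are invented for demonstration:

    import numpy as np

    # Beer-Lambert law: I = I0 * exp(-sigma * n * L)
    # sigma: absorption cross section (cm^2 per molecule) at one wavelength
    # n: number density of the absorbing gas (molecules per cm^3)
    # L: path length of the light through the gas (cm)
    sigma = 1.0e-25   # illustrative value, not a measured cross section
    n = 1.0e18
    L = 1.0e7

    tau = sigma * n * L          # optical depth of the gas column
    transmission = np.exp(-tau)  # fraction of photons that make it through
    print(f"optical depth = {tau:.2f}, transmission = {transmission:.3f}")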

An opacity model encodes a set of assumptions about how light interacts with matter. Astronomers use opacity models to derive certain properties of a material from the spectrum of light it emits. In the context of exoplanets, an opacity model can decode the type and abundance of chemicals in a planet’s atmosphere, based on the planet’s light captured by a telescope.
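That decoding is an inverse problem: find the atmospheric makeup whose modeled spectrum best matches what the telescope recorded. Here is a minimal sketch of the idea, assuming a made-up single absorption feature and a brute-force grid search (real retrievals use full radiative-transfer models and statistical samplers):

    import numpy as np

    wavelengths = np.linspace(1.0, 5.0, 200)  # microns, illustrative range

    def model_spectrum(abundance):
        """Toy transit depth: one Gaussian absorption feature whose
        strength scales with the gas abundance. Purely illustrative."""
        feature = np.exp(-0.5 * ((wavelengths - 2.7) / 0.2) ** 2)
        return 0.01 + 0.002 * abundance * feature

    # Pretend the telescope observed a planet whose true abundance is 0.25.
    rng = np.random.default_rng(0)
    observed = model_spectrum(0.25) + rng.normal(0, 1e-5, wavelengths.size)

    # Retrieval: pick the abundance whose model spectrum best fits the data.
    grid = np.linspace(0.0, 1.0, 101)
    chi2 = [np.sum((observed - model_spectrum(a)) ** 2) for a in grid]
    print(f"retrieved abundance: {grid[np.argmin(chi2)]:.2f}")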

De Wit says the current state-of-the-art opacity model, which he likens to a classic language translation tool, has done a decent job of decoding spectral data taken by instruments such as those on the Hubble Space Telescope.

“So far, this Rosetta Stone is doing well,” says de Wit. “But now that we’re taking Webb’s precision to the next level, our translation process will prevent us from capturing important subtleties, such as those that make the difference between a planet being habitable or not.”

Light, perturbed

He and his colleagues make this point in their study, in which they put the most commonly used opacity model to the test. The team set out to see what atmospheric properties the model would derive if its assumptions about how light and matter interact were tweaked to reflect gaps in our understanding. The researchers created eight of these “perturbed” models. They then fed each model, including the real-life version, “synthetic spectra,” patterns of light simulated by the group with a precision similar to what the James Webb Telescope can measure.
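A loose sketch of that setup, extending the toy retrieval above (illustrative only, not the team’s code): generate a synthetic spectrum under a reference set of assumptions, then refit it with a model whose assumed absorption strength has been deliberately perturbed. The perturbed assumption drags the retrieved abundance away from the true value:

    import numpy as np

    wavelengths = np.linspace(1.0, 5.0, 200)

    def spectrum(abundance, strength):
        """Toy spectrum; 'strength' stands in for the opacity model's
        assumption about how strongly the gas absorbs light."""
        feature = np.exp(-0.5 * ((wavelengths - 2.7) / 0.2) ** 2)
        return 0.01 + strength * abundance * feature

    # Synthetic "truth": abundance 0.25 under the reference assumption.
    rng = np.random.default_rng(1)
    observed = spectrum(0.25, 0.002) + rng.normal(0, 1e-5, wavelengths.size)

    grid = np.linspace(0.0, 1.0, 201)
    for strength in (0.002, 0.0016):  # reference vs. perturbed assumption
        chi2 = [np.sum((observed - spectrum(a, strength)) ** 2) for a in grid]
        print(f"strength={strength}: retrieved abundance "
              f"{grid[np.argmin(chi2)]:.3f}")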

They found that, fed the same light spectra, each perturbed model produced widely different predictions for the properties of a planet’s atmosphere. Based on their analysis, the team concludes that, if existing opacity models are applied to light spectra taken by the Webb telescope, they will hit a “precision wall”. In other words, they will not be sensitive enough to tell whether a planet has an atmospheric temperature of 300 Kelvin or 600 Kelvin, or whether a certain gas makes up 5% or 25% of an atmospheric layer.

“This difference is important for allowing us to constrain planetary formation mechanisms and reliably identify biosignatures,” says Niraula.

The team also found that every perturbed model produced a “good fit” to the data: even when a perturbed model returned a chemical composition that the researchers knew to be incorrect, it still generated a light spectrum from that composition that was close enough to, or “fit,” the original spectrum.
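That degeneracy shows up even in the toy example above: because the retrieval can trade abundance against the perturbed absorption strength, the wrong model fits the data essentially as well as the right one (again, a sketch with invented numbers, not the study’s analysis):

    import numpy as np

    wavelengths = np.linspace(1.0, 5.0, 200)
    noise = 1e-5

    def spectrum(abundance, strength):
        feature = np.exp(-0.5 * ((wavelengths - 2.7) / 0.2) ** 2)
        return 0.01 + strength * abundance * feature

    rng = np.random.default_rng(2)
    observed = spectrum(0.25, 0.002) + rng.normal(0, noise, wavelengths.size)

    # Compare the best achievable fit quality under the reference and the
    # perturbed assumptions: both land near a reduced chi-squared of 1,
    # so the data alone cannot flag the perturbed model as wrong.
    grid = np.linspace(0.0, 1.0, 201)
    for strength in (0.002, 0.0016):
        best = min(np.sum((observed - spectrum(a, strength)) ** 2)
                   for a in grid)
        print(f"strength={strength}: reduced chi2 = "
              f"{best / noise**2 / wavelengths.size:.2f}")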

“We found that there were enough parameters to tweak, even with a wrong model, to still get a good fit, meaning you wouldn’t know that your model is wrong and that what it’s telling you is wrong,” explains de Wit.

He and his colleagues offer some ideas for improving existing opacity models, including the need for more laboratory measurements and theoretical calculations to refine the models’ assumptions about how light interacts with various molecules, as well as collaboration across disciplines, particularly between astronomy and spectroscopy.

“There are so many things that could be done if we fully understood how light and matter interact,” Niraula says. “We know that pretty well around Earth conditions, but as we move to different types of atmospheres, things change, and that’s a lot of data, of increasing quality, that we risk misinterpreting.”

– This press release was originally posted on the Massachusetts Institute of Technology website