Solar panels glimmering in the sun are an icon of all that is green. But while generating electricity through photovoltaics is indeed better for the environment than burning fossil fuels, several incidents have linked the manufacture of these shining symbols of environmental virtue to a trail of chemical pollution. And it turns out that the time it takes to compensate for the energy used and the greenhouse gases emitted in photovoltaic panel production varies substantially by technology and geography.
That's the bad news. The good news is that the industry could readily eliminate many of the damaging side effects that do exist. Indeed, pressure for it to do so is mounting, in part because, since 2008, photovoltaics manufacturing has moved from Europe, Japan, and the United States to China, Malaysia, the Philippines, and Taiwan; today nearly half the world's photovoltaics are manufactured in China. As a result, although the overall track record for the industry is good, the countries that produce the most photovoltaics today typically do the worst job of protecting the environment and their workers.
To understand exactly what the problems are, and how they might be addressed, it's helpful to know a little something about how photovoltaic panels are made. While solar energy can be generated using a variety of technologies, the vast majority of solar cells today start as quartz, the most common form of silica (silicon dioxide), which is refined into elemental silicon. There's the first problem: The quartz is extracted from mines, putting the miners at risk of one of civilization's oldest occupational hazards, the lung disease silicosis.
The initial refining turns quartz into metallurgical-grade silicon, a substance used mostly to harden steel and other metals. That happens in giant furnaces, and keeping them hot takes a lot of energy, a subject we'll return to later. Fortunately, the levels of the resulting emissions—mostly carbon dioxide and sulfur dioxide—can't do much harm to the people working at silicon refineries or to the immediate environment.
The next step, however—turning metallurgical-grade silicon into a purer form called polysilicon—creates the very toxic compound silicon tetrachloride. The refinement process involves combining hydrochloric acid with metallurgical-grade silicon to turn it into what are called trichlorosilanes. The trichlorosilanes then react with added hydrogen, producing polysilicon along with liquid silicon tetrachloride—three or four tons of silicon tetrachloride for every ton of polysilicon.
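For readers who want to see the underlying chemistry, a simplified sketch of the textbook Siemens-route reactions looks like this (idealized stoichiometry; real refineries recycle intermediates, and the exact byproduct ratio varies from plant to plant):

$$\mathrm{Si_{(metallurgical)}} + 3\,\mathrm{HCl} \rightarrow \mathrm{SiHCl_3} + \mathrm{H_2}$$

$$\mathrm{SiHCl_3} + \mathrm{H_2} \rightarrow \mathrm{Si_{(poly)}} + 3\,\mathrm{HCl}$$

$$4\,\mathrm{SiHCl_3} \rightarrow \mathrm{Si_{(poly)}} + 3\,\mathrm{SiCl_4} + 2\,\mathrm{H_2}$$

In this simplified picture, the first reaction makes trichlorosilane, the second deposits polysilicon, and the third is the side reaction responsible for the silicon tetrachloride waste.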
Most manufacturers recycle this waste to make more polysilicon. Capturing silicon from silicon tetrachloride requires less energy than obtaining it from raw silica, so recycling this waste can save manufacturers money. But the reprocessing equipment can cost tens of millions of dollars. So some operations have just thrown away the by-product. If exposed to water—and that's hard to prevent if it's casually dumped—the silicon tetrachloride releases hydrochloric acid, acidifying the soil and emitting harmful fumes.
When the photovoltaics industry was smaller, the solar-cell manufacturers got their silicon from chipmakers, which rejected wafers that did not meet the computer industry's purity requirements. But the boom in photovoltaics demanded more than semiconductor-industry leftovers, and many new polysilicon refineries were built in China. Few countries at the time had stringent rules covering the storage and disposal of silicon tetrachloride waste, and China was no exception, as some Washington Post reporters discovered.
The paper's investigation, published in March 2008, profiled a Chinese polysilicon facility owned by Luoyang Zhonggui High-Technology Co., located near the Yellow River in the country's Henan province. This facility supplied polysilicon to Suntech Power Holdings, at the time the world's largest solar-cell manufacturer, as well as to several other high-profile photovoltaics companies.
The reporters found that the company was dumping silicon tetrachloride waste on neighboring fields instead of investing in equipment that could reprocess it, rendering those fields useless for growing crops and inflaming the eyes and throats of nearby residents. And the article suggested that the company was not alone in this practice.
After the publication of the Washington Post story, solar companies' stock prices fell. Investors feared the revelations would undermine an industry that relies so much on its green credentials. After all, that's what attracts most customers and draws public support for policies that foster the adoption of solar energy, such as the Residential Renewable Energy Tax Credit in the United States. Those who purchase residential solar systems can subtract 30 percent of the cost from their tax bills until the incentive expires in 2016.
To protect the industry's reputation, the manufacturers of photovoltaic panels began to inquire about the environmental practices of their polysilicon suppliers. Consequently, the situation is now improving. In 2011 China set standards requiring that companies recycle at least 98.5 percent of their silicon tetrachloride waste. These standards are easy to meet so long as factories install the proper equipment. Yet it remains to be seen how well the rules are being enforced.
This problem may eventually disappear. Researchers at the National Renewable Energy Laboratory in Golden, Colo., are looking for ways to make polysilicon with ethanol instead of chlorine-based chemicals, thereby avoiding the creation of silicon tetrachloride altogether.
The struggle to keep photovoltaics green does not end with the production of polysilicon. Solar-cell manufacturers purify chunks of polysilicon to form bricklike ingots and then slice the ingots into wafers. Then they introduce impurities into the silicon wafers, creating the essential solar-cell architecture that produces the photovoltaic effect.
These steps all involve hazardous chemicals. For example, manufacturers rely on hydrofluoric acid to clean the wafers, remove damage that comes from sawing, and texture the surface to better collect light. Hydrofluoric acid works great for all these things, but when it touches an unprotected person, this highly corrosive liquid can destroy tissue and decalcify bones. So handling hydrofluoric acid requires extreme care, and it must be disposed of properly.
But accidents do happen and are more likely in places that have limited experience manufacturing semiconductors or that have lax environmental regulations. In August 2011, a factory in China's Zhejiang province owned by Jinko Solar Holding Co., one of the largest photovoltaic companies in the world, spilled hydrofluoric acid into the nearby Mujiaqiao River, killing hundreds of fish. And farmers working adjacent lands, who used the contaminated water to clean their animals, accidentally killed dozens of pigs.
In investigating the dead pigs, Chinese authorities found levels of hydrofluoric acid in the river 10 times the permitted limit, and they presumably took these measurements long after much of the hydrofluoric acid had washed downstream. Hundreds of local residents, upset over the incident, stormed and temporarily occupied the manufacturing facility. Again, investors reacted: When major media outlets carried the news the next day, Jinko's stock price dropped by more than 40 percent, translating to nearly US $100 million in lost value.
This threat to the environment needn't continue. Researchers at Rohm & Haas Electronic Materials, a subsidiary of Dow Chemical, have identified substitutes for the hydrofluoric acid used in solar-cell manufacture. One good candidate is sodium hydroxide (NaOH). Although NaOH is itself a caustic chemical, it poses less risk to workers than hydrofluoric acid, and both the chemical and the wastewater containing it are easier to treat and dispose of.
Although more than 90 percent of photovoltaic panels made today start with polysilicon, there is a newer approach: thin-film solar-cell technology. The thin-film varieties will likely grow in market share over the next decade, because they can be just as efficient as silicon-based solar cells and yet cheaper to manufacture, as they use less energy and material.
Makers of thin-film cells deposit layers of semiconductor material directly on a substrate of glass, metal, or plastic instead of slicing wafers from a silicon ingot. This produces less waste and completely avoids the complicated melting, drawing, and slicing used to make traditional cells. In essence, a piece of glass goes in at one end of the factory and a fully functional photovoltaic module emerges from the other.
Moving to thin-film solar cells eliminates many of the environmental and safety hazards from manufacturing, because there's no need for certain problematic chemicals—no hydrofluoric acid, no hydrochloric acid. But that does not mean you can automatically stamp a thin-film solar cell as green.
Today's dominant thin-film technologies are cadmium telluride and a more recent competitor, copper indium gallium selenide (CIGS). In the former, one semiconductor layer is made of cadmium telluride; the second is cadmium sulfide. In the latter, the primary semiconductor material is CIGS, but the second layer is typically cadmium sulfide. So each of these technologies uses compounds containing the heavy metal cadmium, which is both a carcinogen and a genotoxin, meaning that it can cause inheritable mutations.
Manufacturers like First Solar, based in Tempe, Ariz., have a strong record of protecting workers from cadmium exposures during manufacture. But there is little information about exposures to workers involved with cadmium at earlier stages in the life cycle of the metal, from the zinc mines where much of cadmium originates through the smelting process that purifies cadmium and turns it into semiconductor materials. Exposures after solar panels are discarded are also a concern. Most of the cadmium telluride that manufacturers dispose of due to damage or manufacturing defects is recycled under safe, controlled conditions. On the postconsumer end of the equation, the industry proactively set up a solar-panel collection and recycling scheme in Europe. Individual companies have also established recycling programs, such as First Solar's prefunded take-back system. But more needs to be done; not every consumer has access to a free take-back program, and indeed many consumers may not even be aware of the need to dispose of panels responsibly.
The best way to avoid exposing workers and the environment to toxic cadmium is to minimize the amount used or to use no cadmium at all. Already, two major CIGS-photovoltaic manufacturers—Avancis and Solar Frontier—are using zinc sulfide, a relatively benign material, instead of cadmium sulfide. And researchers from the University of Bristol and the University of Bath, in England; the University of California, Berkeley; and many other academic and government laboratories are trying to develop thin-film photovoltaics that do not require toxic elements like cadmium or rare elements like tellurium. First Solar has meanwhile been steadily reducing the amount of cadmium used in its solar cells.
Toxicity isn't the only concern. Making solar cells requires a lot of energy. Fortunately, because these cells generate electricity, they pay back the original investment of energy; most do so after just two years of operation, and some companies report payback times as short as six months. This “energy payback” time is not the same as the time needed to recoup a consumer's financial investment in solar panels; it measures investments and payback times in kilowatt-hours, not money.
Analysts also judge the impact of the energy used to make a solar panel by the amount of carbon generated in the production of that energy—a number that can vary widely. To do this, we give the energy a carbon-intensity value, usually represented as kilograms of CO2 emitted per kilowatt-hour generated. Places that depend largely on coal have the most carbon-intense electricity in the world: Chinese electricity is a good example, having roughly twice the carbon intensity of U.S. electricity. This fits with the results of researchers in Illinois at Argonne National Laboratory and Northwestern University. In a report published this past June, they found that the carbon footprint of photovoltaic panels made in China is indeed about double that of those manufactured in Europe.
If the photovoltaic panels made in China were installed in China, the high carbon intensity of the energy used and that of the energy saved would cancel each other out, and the time needed to counterbalance greenhouse-gas emissions during manufacture would be the same as the energy-payback time. But that's not what's been happening lately. The manufacturing is mostly located in China, and the panels are often installed in Europe or the United States. At double the carbon intensity, it takes twice as long to compensate for the greenhouse-gas emissions as it does to pay back the energy investments.
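As a rough sketch of that arithmetic (with illustrative placeholder numbers, not measured values), the greenhouse-gas payback time scales the energy payback time by the ratio of carbon intensities between the grid where a panel is made and the grid whose electricity it displaces:

```python
# A rough sketch of the greenhouse-gas payback arithmetic described above.
# All numbers are illustrative placeholders, not measured values.

energy_payback_years = 2.0      # typical energy-payback time cited for silicon panels
ci_where_made = 1.0             # carbon intensity of the manufacturing grid (kg CO2 per kWh)
ci_where_installed = 0.5        # carbon intensity of the grid the panel displaces (kg CO2 per kWh)

# If the electricity used to make the panel is twice as carbon-intense as the
# electricity the panel displaces, the emissions take twice as long to pay back.
ghg_payback_years = energy_payback_years * (ci_where_made / ci_where_installed)

print(f"Greenhouse-gas payback time: {ghg_payback_years:.1f} years")  # -> 4.0 years
```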
Of course, if you manufacture photovoltaic panels with low-carbon electricity (for example, in a solar-powered factory) and install them in a high-carbon-intensity country, the greenhouse-gas-payback time will be lower than the energy-payback time. So perhaps someday, powering photovoltaic-panel manufacturing with wind, solar, and geothermal energy will end concerns about the carbon footprint of photovoltaics.
Water is yet another issue. Photovoltaic manufacturers use a lot of it for various purposes, including cooling, chemical processing, and air-pollution control. The biggest water waster, though, is cleaning during installation and use. Utility-scale projects in the 230- to 550-megawatt range can require up to 1.5 billion liters of water for dust control during construction and another 26 million liters annually for panel washing during operation. However, the amount of water used to produce, install, and operate photovoltaic panels is significantly lower than that needed to cool thermoelectric fossil- and fissile-power plants.
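For a sense of scale, a back-of-the-envelope calculation using the upper-bound figures above (and assuming the largest, 550-megawatt project) works out to a few million liters per megawatt for construction and tens of thousands of liters per megawatt each year for washing:

```python
# Back-of-the-envelope water intensity for a utility-scale project, using the
# upper-bound figures quoted above (illustrative, assuming a 550-megawatt project).

project_capacity_mw = 550            # upper end of the 230- to 550-MW range
construction_water_liters = 1.5e9    # dust control during construction
annual_washing_liters = 26e6         # panel washing during operation, per year

print(f"Construction: {construction_water_liters / project_capacity_mw:,.0f} liters per megawatt")
print(f"Washing: {annual_washing_liters / project_capacity_mw:,.0f} liters per megawatt per year")
# -> roughly 2.7 million liters/MW for construction and about 47,000 liters/MW/year for washing
```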
The choices investors and consumers make could, in principle, have a big influence on photovoltaic manufacturers' practices. But it's often tough to tell how these companies differ in the care they take to protect the environment. The solar industry has no formal ecolabel, like the Energy Star labels on household appliances and consumer electronics that help U.S. buyers identify energy-efficient products. And most people do not go out and purchase solar panels themselves. They hire third-party installers. So even if there were an ecolabeling scheme, it would depend on installers' willingness to choose ecofriendly products.
For now, consumers can help push manufacturers to improve their environmental and safety records by asking installers about the companies making the products they use. This, in turn, would prompt installers to ask the manufacturers for more information.
Researchers at the National Photovoltaics Environmental Research Center at Brookhaven National Laboratory in Upton, N.Y., have long been publishing studies about the possible environmental hazards of photovoltaics. Recently, formal environmental performance ratings for the solar industry have started to emerge.
Organizations such as the Center for International Earth Science Information Network are trying to establish some means of determining the environmental, health, and safety performance of manufacturers in developing countries. This group, which includes researchers from Yale and Columbia, is proposing the China Environmental Performance Index, which would operate at the provincial level to help China track progress toward environmental-policy goals.
Meanwhile, the Solar Energy Industries Association, a U.S. national trade organization, has proposed new industry guidelines in a document called the “Solar Industry Environment & Social Responsibility Commitment,” aimed at preventing occupational injury and illness, preventing pollution, and reducing the natural resources used in production. The document urges companies to ask suppliers to report on manufacturing practices and any chemical and greenhouse-gas emissions.
In addition, the Silicon Valley Toxics Coalition, which rates the environmental performance of electronics companies, has surveyed and ranked photovoltaic manufacturing companies based in or operating in China, Germany, Malaysia, the Philippines, and the United States. Participation is voluntary and so far includes such major manufacturers as First Solar, SolarWorld, SunPower, Suntech, Trina, and Yingli; Chinese manufacturers Trina and Yingli have consistently ranked among the world's top three most environmentally responsible companies. And Sharp, SolarWorld, and SunPower have been carefully tracking the greenhouse gases emitted and chemicals used in the manufacture of their solar panels for several years.
Such initiatives are coming none too soon. Many people today view photovoltaics as a panacea for our energy woes, given how dirty most of the alternatives are. But that does not mean we should turn a blind eye to the darker side of this technology. Indeed, we need to consider it very carefully. And just maybe, with a sustained effort by consumers, manufacturers, and researchers, the photovoltaics industry will one day be truly, not just symbolically, green.
This article originally appeared in print as “Solar's Green Dilemma."
This article was updated 12 November 2014.
Dustin Mulvaney is an assistant professor of environmental studies at San Jose State University, in California, where he concentrates on the solar-energy, biofuel, and natural-gas industries. Although he identifies himself as both a solar advocate and a solar user—he has a photovoltaic array in his yard—his research has made him mindful of the significant health risks and environmental costs of manufacturing PV panels.
Extraordinarily thin sheets in ferroelectric crystals may lead to flexible, adaptable electronics
Charles Q. Choi is a science reporter who contributes regularly to IEEE Spectrum. He has written for Scientific American, The New York Times, Wired, and Science, among others.
(Top left) Piezoresponse force microscopy images of ferroelectric domains in lithium niobate. (Bottom left) Conducting atomic force microscopy images of ferroelectric domains in lithium niobate. (Top right) Piezoresponse force microscopy images of a lithium niobate thin film. (Bottom right) Cross-sectional high-angle annular dark-field scanning transmission electron microscopy image of ferroelectric domains in lithium niobate.
Atomically thin materials such as graphene have drawn attention for the exceptional speed at which electrons can race through them, leading to visions of advanced new electronics. Now scientists find that similar behavior can exist within two-dimensional sheets known as domain walls that are embedded within unusual crystalline materials. Moreover, unlike other atomically thin sheets, domain walls can easily be created, moved, and destroyed, which may open the way to novel circuits that can instantly transform or repair themselves on command.
In the new study, researchers investigated a crystalline lithium niobate ferroelectric film just 500 nanometers thick. Electric charges within materials separate into positive and negative poles, and ferroelectrics are materials in which these electric dipoles are generally oriented in the same direction.
The electric dipoles in ferroelectrics are clustered in regions known as domains. These are separated by two-dimensional layers known as domain walls.
The amazing electronic properties of two-dimensional materials such as graphene and molybdenum disulfide have led researchers to hope they may allow Moore's Law to continue once it becomes impossible to make further progress using silicon. Researchers have also investigated similarly attractive behavior in exceptionally thin electrically conducting heterointerfaces between two different insulating materials, such as lanthanum aluminate and strontium titanate.
Domain walls are essentially homointerfaces between chemically identical regions of the same material. However, unlike any other 2-D electronic material, applied electric or magnetic fields can readily create, move and annihilate domain walls inside materials.
This unique quality of domain walls may potentially lead to novel "domain wall electronics" far more flexible and adaptable than current devices that rely on static components. One might imagine entire circuits "created in one instant, for one purpose, only to be wiped clean and rewritten in a different form, for a different purpose, in the next instant," says study lead author Conor McCluskey, a physicist at Queen's University Belfast in the United Kingdom. "Malleable domain wall network architecture that can continually metamorphose could represent a kind of technological genie, granting wishes on demand for radical moment-to-moment changes in electronic function."
However, scientists have found it difficult to examine domain walls in detail. The fact that domain walls are both very thin and buried under the surfaces of crystals makes them less easy to analyze "than regular 3D or even 2D materials," McCluskey says.
In the new study, McCluskey and his colleagues took advantage of the fact that the domain walls in the crystals they were investigating are shaped like cones. This geometry let them analyze the behavior of the domain walls using a relatively simple probe design.
"Malleable domain wall network architecture that can continually metamorphose could represent a kind of technological genie, granting wishes on demand for radical moment-to-moment changes in electronic function."—Conor McCluskey
The scientists found that, on average, the mobility of electric charges in the domain walls was exceptionally high at room temperature. The value may be "the highest room-temperature value in any oxide" and "at least comparable to that seen in graphene," McCluskey says. They detailed their findings in the 11 August issue of the journal Advanced Materials.
Precise values for such parameters "are needed for envisioning and building devices that work reliably," McCluskey says. "The dream is that it could allow completely malleable or ephemeral nanocircuitry to be created, destroyed and reformed from one moment to the next."
One promising application for domain walls may be brain-mimicking neuromorphic computing, with neuromorphic devices playing the role of the synapses that link neurons together, McCluskey says.
"The brain works by forging pathways which have some memory about their history: if a particular synaptic pathway is used more frequently, it becomes stronger, making it easier for this pathway to be used in the future. The brain learns by forging these stronger pathways," McCluskey says. "Some domain wall systems can behave in the same way: if you apply a small voltage to walls in our particular system, they tilt and change slightly, increasing their conductivity and giving a higher current. The next pulse will produce a higher current, and so on and so on, as if they have some memory of their past."
If domain walls can play the role of artificial synapses, "this could pave the way to a low-heat-production, low-power-consumption brain-like architecture for neuromorphic computing," he adds.
However, although reconfigurable electronics based on domain walls are a tantalizing idea, McCluskey notes that in many ferroelectrics, the domain walls conduct only marginally better than the rest of the material, and so they will likely not help support viable devices.
"This isn't a problem for the system we have investigated, lithium niobate, as it has quite an astonishing ratio between the conductivity of the domain walls and the bulk material," McCluskey says. However, lithium niobate does currently require large voltages to manipulate domain walls. Scaling these systems down in thickness for use with everyday voltages "is one major hurdle," he notes. "We are working on it."
Future experiments will explore why charge mobility is so high in domain walls. "Broadly speaking, the carrier mobility relies on two things—the number of times the charge carrier will scatter or bump into something on its journey through the material, and the so-called 'effective mass' with which the carrier moves," McCluskey says.
Electrons can scatter off defects in materials, as well as off lattice vibrations known as phonons. "It is possible the presence of a domain wall alters the defect or phonon concentrations locally, resulting in fewer scattering centers along the domain wall," McCluskey says.
When it comes to the effective mass of a charge carrier such as an electron, "when we consider an electron moving through a crystal lattice, we need to consider it not as a free electron, such as one in vacuum, but as an electron moving through the solid crystalline environment," McCluskey explains. "The electron feels the effect of the nearby atoms as it progresses, changing its energy as it moves closer or further away from any given atom." This can essentially make an electron moving in a crystal lighter or heavier than a normal electron. The way in which domain walls disturb the crystal lattice may in turn alter the carrier's effective mass, he says.
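In the standard textbook picture (a general semiconductor relation, not a result from this study), those two contributions combine in the Drude expression for carrier mobility:

$$\mu = \frac{e\,\langle\tau\rangle}{m^{*}}$$

where $e$ is the elementary charge, $\langle\tau\rangle$ is the average time between scattering events (longer when there are fewer defects and phonons to scatter off), and $m^{*}$ is the carrier's effective mass. A domain wall could raise the mobility by lengthening $\langle\tau\rangle$, by lowering $m^{*}$, or both.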
"Without further experiment, it's impossible to say which of these contributions is more responsible for determining the carrier mobility in our system," McCluskey says. ""We hope that our study prompts a shift in focus towards characterizing the transport in domain wall systems, which may be every bit as exciting as some of the other 2D functional materials systems at the forefront of research today."
Scientist Scott Acton on optimizing the wavefront sensing and control of the James Webb Space Telescope
Ned Potter, a writer from New York, spent more than 25 years as an ABC News and CBS News correspondent covering science, technology, space, and the environment.
The James Webb Space Telescope, in just a few months of operation, has begun to change our view of the universe. Its images—more detailed than what was possible before—show space aglow with galaxies, some of them formed very soon after the big bang.
None of this would be possible without the work of a team led by Scott Acton, the lead wavefront sensing and control scientist for the Webb at Ball Aerospace & Technologies in Colorado. He and his colleagues developed the systems that align the 18 separate segments of the Webb’s primary mirror with its smaller secondary mirror and science instruments. To produce clear images in the infrared wavelengths the telescope uses, the segments have to be within tens of nanometers of the shape specified in the spacecraft design.
Acton grew up in Wyoming and spent more than 20 years on the Webb team. IEEE Spectrum spoke with Acton after his team had finished aligning the telescope’s optics in space. This transcript has been edited for clarity and brevity.
Tell your story. What got you started?
Scott Acton: When I was seven years old, my dad brought home a new television. And he gave me the old television to take apart. I was just enthralled by what I saw inside this television. And from that moment on I was defined by electronics. You look inside an old television and there are mechanisms, there are smells and colors and sights and for a seven-year-old kid, it was just the most amazing thing I’d ever seen.
Fast-forward 25 years and I’m working in the field of adaptive optics. And eventually that led to wavefront sensing and controls, which led to the Webb telescope.
Called the Cosmic Cliffs, Webb’s seemingly three-dimensional picture looks like craggy mountains on a moonlit evening. In reality, it is the edge of the giant, gaseous cavity within NGC 3324, and the tallest “peaks” in this image are about 7 light-years high. NASA/ESA/CSA/STScI
Talk about your work getting the telescope ready for flight. You worked on it for more than 20 years.
Acton: Well, we had to invent all of the wavefront sensing and controls. None of that technology really existed in 2001, so we started from the ground up with concepts and simple experiments. Then more complicated, very complicated experiments and eventually something known as TRL 6 technology—Technology Readiness Level 6—which demonstrated that we could do this in a flightlike environment. And then it was a question of taking this technology, algorithms, understanding it and implementing it into very robust procedures, documentation, and software, so that it could then be applied on the flight telescope.
What was it like finally to launch?
Acton: Well, I’ve got to say, there was a lot of nervousness, at least on my part. I was thinking we had a 70 percent chance of mission success, or something like that. It’s like sending your kid off to college—this instrument that we’d been looking at and thinking about.
The Ariane 5 vehicle is so reliable. I didn’t think there was going to be any problem with it, but deployment starts, basically, minutes after launch. So, for me, the place to be was at a computer console [at the Space Telescope Science Institute in Baltimore].
And then there were a lot of things that had to work.
Acton: Yes, right. But there are some things that are interesting. They have these things called nonexplosive actuators [used to secure the spacecraft during launch]. There are about 130 of them. And you actually can’t test them. You build them and they get used, basically, once. If you do reuse one, well, it’s now a different actuator because you have to solder it back together. So you can’t qualify the part, but what you can do is qualify the process.
We could have still had a mission if some didn’t fire, but most of them were absolutely necessary for the success of the mission. So just ask yourself, let’s suppose you want to have a 95 percent chance of success. What number raised to the 130th power is equal to 0.95? That number is basically one. These things had to be perfect.
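Acton's back-of-the-envelope argument is easy to reproduce; here is a minimal sketch in Python (the 130 actuators and the 95 percent target come from his remarks, everything else is illustrative):

```python
# If all 130 single-use actuators must fire for the mission to succeed, and you
# want a 95 percent chance that every one of them works, each actuator needs a
# reliability of 0.95 ** (1/130) -- about 0.9996, i.e. "basically one."

n_actuators = 130
mission_success_target = 0.95

per_actuator_reliability = mission_success_target ** (1 / n_actuators)
print(f"Required per-actuator reliability: {per_actuator_reliability:.5f}")  # ~0.99961
```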
I remember walking home one night, talking on the phone to my wife, Heidi, and saying, “If I’m wrong about this I’ve just completely screwed up the telescope.” She said, “Scott, that’s why you’re there.” That was her way of telling me to cowboy up. The responsibility had to come down to somebody and in that moment, it was me.
I think the public perception was that the Webb was in very good shape and the in-flight setup all went very well. Would you say that’s accurate?
Acton: Early on in the mission there were hiccups, but other than that, I’d say things just went beyond our wildest expectations. Part of that comes down to the fact that my team and I had commissioned the telescope 100 times in simulations. And we always made it a little harder. I think that served us well because when we got to the real telescope, it was quite robust. It just worked.
Take us through the process of aligning the telescope.
Acton: The first image we got back from the telescope was 2 February, in the middle of the night. Most people had gone home, but I was there, and a lot of other people were too. We just pointed the telescope at the Large Magellanic Cloud, which has lots and lots of stars in it, and took images on the near-infrared cameras. People were really happy to see these images because they were looking basically to make sure that the science instruments worked.
But some of us were really concerned with that image, because you could see some very significant astigmatism—stronger than we were expecting to see from our simulations. Later we would learn that the telescope’s secondary mirror was off in translation—about 1.5 millimeters along the deployment axis and about a millimeter in the other axis. And the primary mirror segments were clocked a bit from the perfectly aligned state.
Lee Feinberg, the telescope lead at NASA Goddard, texted me and said, “Scott, why can’t you just simulate this to see if you can get some images that bad?” So that morning I ran a simulation and was able to reproduce almost exactly what we were seeing in these images. We realized that we were not going to have any major problems with the wavefront.
Describe the cadence of your work during commissioning. What would a day be like?
Acton: One of the rules we set up very early on was that in terms of wavefront sensing and control, we would always have two people sitting in front of the computers at any given time. Anytime anything significant happened, I always wanted to make sure that I was there, so I got an apartment [near the institute in Baltimore]. From my door to the door of the Mission Operations Center was a 7-minute walk.
In this mosaic image stretching 340 light-years across, Webb’s Near-Infrared Camera (NIRCam) displays the Tarantula Nebula star-forming region in a new light, including tens of thousands of never-before-seen young stars that were previously shrouded in cosmic dust. NASA/ESA/CSA/STScI/Webb ERO Production Team
There were certainly times during the process where it had a very large pucker factor, if you will. We couldn’t point the telescope reliably at the very beginning. And a lot of our software, for the early steps of commissioning, depended on the immutability of telescope pointing. We wanted to have the telescope repeatedly pointed to within a couple of arc-seconds and it was closer to 20 or 30. Because of that, some of the initial moves to align the telescope had to be calculated, if you will, by hand.
But when the result came back, we could see the images. We pointed the telescope at a bright isolated star and then we could see, one at a time, 18 spots appearing in the middle of our main science detector. I remember a colleague saying, “I now believe we’re going to completely align the telescope.” He felt in his mind that if we could get past that step, that everything else was downhill.
You’re trying to piece together the universe. It’s hard to get it right, and very easy to make mistakes. But we did it.
Building the Webb was, of course, a big, complicated project. Do you think there are any particular lessons to be drawn from it that people in the future might find useful?
Acton: Here are a couple of really big ones that apply to wavefront sensing and control. One is that there are multiple institutions involved—Northrop Grumman, Ball Aerospace, the Goddard Space Flight Center, the Space Telescope Science Institute—and the complication of having all these institutional lines. It could have been very, very difficult to navigate. So very early on we decided not to have any lines. We were a completely badgeless team. Anybody could talk to anybody. If someone said, “No, I think this is wrong, you should do it this way,” even if they didn’t necessarily have contractual responsibility, everybody listened.
Another big lesson we learned was about the importance of the interplay between experimentation and simulation. We built a one-sixth scale model, a fully functional optical model of the telescope, and it’s still working. It allowed us, very early on, to know what was going to be difficult. Then we could address those issues in simulation. That understanding, the interplay between experimentation and modeling and simulations, was absolutely essential.
Recognizing, of course, that it's very early, do you yet have a favorite image?
Acton: My favorite image, so far, was one that was taken during the last real wavefront activity that we did as part of commissioning. It was called a thermal slew test. The telescope has a large sunshield, but the sunshield can be at different angles with respect to the sun. So to make sure it was stable, we aimed it at a bright star we used as a guide star, put it in one orientation, and stayed there for five or six days. And then we switched to a different orientation for five or six days. It turned out to be quite stable. But how do you know that the telescope wasn't rolling about the guide star? To check this, we took a series of test images with the redundant fine-guidance sensor. As you can imagine, when you have a 6.5-meter telescope at L2, away from any competing light sources and cooled to 50 kelvins, yes, it is sensitive. Even just one 20-minute exposure is going to have unbelievable detail regarding the deep universe. Imagine what happens if you take 100 of those images and average them together. We came up with an image of just some random part of the sky.
Scott Acton’s favorite Webb image: A test image of a random part of the sky, shot with the Webb’s fine-guidance sensor. The points with six-pointed diffraction patterns are stars; all other points are galaxies. NASA/CSA/FGS
I sent this image to James Larkin at UCLA, and he looked at it and estimated that that single image had 15,000 galaxies in it. Every one of those galaxies probably has between 100 [billion] and 200 billion stars.
I don’t talk about religion too much when it comes to this, but I must have had in my mind a Biblical reference to the stars singing. I pictured all of those galaxies as singing, as if this was a way for the universe to express joy that after all these years, we could finally see them. It was quite an emotional experience for me and for many people.
You realized that there was so much out there, and you weren’t even really looking for it yet? You were still phasing the telescope?
Acton: That's right. I guess I'm not sure what I expected. I figured you'd just see dark sky. Well, there is no dark sky. Dark sky is a myth. Galaxies are everywhere.
Finally, we got to our first diffraction-limited image [with the telescope calibrated for science observations for the first time]. And that’s the way the telescope is operating now.
Several days later, about 70 of us got together—astronomers, engineers, and other team members. A member of the team—his name is Anthony Galyer—and I had gone halves several years earlier and purchased a bottle of cognac from 1906, the year that James Webb was born. We toasted James Webb and the telescope that bears his name.
Learn how multiphysics simulation can help you accurately model battery cells and packs
Lithium-ion (Li-ion) batteries are the preferred choice for electric and hybrid vehicles, energy storage systems, and consumer electronics. One of the top safety concerns with Li-ion batteries is thermal runaway and its cascading effect through the whole pack. To predict thermal runaway, it is necessary to account for several different physical phenomena, including chemical reactions within the cell, heat transfer at the cell and pack level, the structural design of the pack, and fluid flow in the battery pack's cooling system.
Join us for this live webinar to see how multiphysics simulation can help you accurately model battery cells and packs, predict thermal runaway, and optimize the design of thermal management for battery packs. The webinar will include demonstrations in the COMSOL software and conclude with a Q&A session.