
While we often discuss expansion into the Solar System as a step leading to interstellar flight, the movement into space has its dark side, as author Daniel Deudney argues in a new book. As Kenneth Roy points out in the review that follows, it behooves everyone involved in space studies to understand what the counter-arguments are. Ken is a newly retired professional engineer who is currently living amidst, as he puts it, “the relics of the Manhattan Project in Oak Ridge, Tennessee.” His professional career involved working for various Department of Energy (DOE) contractors in the fields of fire protection and nuclear safety. As a long-time hobby, he has been working with the idea of terraforming, which he extended to the invention of the “Shell Worlds” concept as a way to terraform planets and large moons well outside a star’s ‘Goldilocks’ zone [see Terraforming: Enter the Shell World].

In 1997, Ken made the cover of the prestigious Proceedings of the U.S. Naval Institute for his forecast of anti-ship, space-based, kinetic energy weapons. With his co-authors R.G. Kennedy and D.E. Fields, he has appeared multiple times in JBIS and Acta Astronautica with papers on terraforming and space colonization. He is a founding member of the not-for-profit corporation Tennessee Valley Interstellar Workshop (TVIW), now operating as the Interstellar Research Group, and remains active in that organization. A graduate of the Illinois Institute of Technology and the University of Tennessee at Knoxville in engineering, Ken tells me he enjoys reading science fiction, history, alternative history, military history, and books on space colonization and terraforming.

Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney (Oxford University Press, 2020).

A review by Kenneth Roy

Professor Deudney teaches political science, international relations, and political theory at Johns Hopkins University. His book can be difficult to read, in large part because of its academic writing style. Although the book contains a number of interesting arguments, the lack of clarity and conciseness makes them somewhat difficult to access. Once you get past the writing style, Deudney's argument is that humanity's expansion into space will decrease the probability of human survival. He raises some good questions about the future of Earth and makes a few good points applicable to humanity's expansion into the solar system and beyond. Science fiction readers and space enthusiasts will not enjoy this book, but it is important that we try to understand and evaluate Deudney's arguments rather than dismiss them out of hand. You should appreciate your enemies; they will point out things that your friends and allies will never mention, things you probably need to know.

Prometheans argue that scientific and technological advances allow for the total transformation of the human condition, a realization of utopia, with material abundance and even individual immortality. Starting with the industrial revolution, this trajectory seemed to be leading to a very positive future for humanity. But around the mid-twentieth century a number of concerns surfaced suggesting a much more pessimistic end to the Promethean vision. These concerns include nuclear and biological weapons, genetic engineering, artificial intelligence, environmental collapse, and even new forms of despotism based on advanced surveillance and coercion technologies. Technology is always a two-edged sword, capable of great good and great harm depending on the intentions and even the wisdom of the humans who wield it. This is the dilemma on which Dr. Deudney bases his central argument. He seems to suggest that because the sword can indeed harm its owner, perhaps the owner is better off without it; or, if he absolutely must have a sword, he should make it as harmless as possible. Deudney argues that humanity should be able to discern which technologies offer more risk than reward, and that these should be proscribed while we pursue technologies and policies that offer great reward for only minor risk. He argues that the colonization of space and the exploitation of space-based resources belong in the former category and should be prohibited.

But Deudney isn’t entirely anti-space. He advocates Earth-centered space activities focused on nuclear security and environmental protection. He is fine with communication and weather satellites. He believes that space activities should be used to protect the Earth rather than to expand the militarization and colonization of space.

Deudney calls advocates of human expansion into space and the exploitation of its resources “space expansionists.” He describes space expansionism as a “complex and captivating ideology…that extrapolates and amplifies the Promethean worldview of technological modernism into a project of literally cosmic scope.” He considers space expansionism to be a science-based and technology-dependent religion. Space expansionists advocate human expansion into space and believe that such expansion is desirable not only for those lucky enough to work and live in space but also for humanity in general and the Earth in particular. According to Deudney, space expansionists promise humanity a permanent final frontier, as well as knowledge and material and energy resources almost beyond measure that can help address Earth’s environmental problems. Deudney disagrees and offers a number of arguments, which are discussed below.

Two worrisome technologies that Deudney identifies as being advocated by space expansionists are genetic and cybernetic technologies. The first is also termed transhumanism, the improvement of human beings through genetic manipulation. The second is machine enhancement of human bodies and minds, or possibly the complete replacement of humans with machines of greater intellectual and physical capability. These two developing technologies do indeed pose many ethical questions. They would be useful, but not necessary, for a successful expansion of humanity into space. But even if the human (or transhuman or cybernetic) expansion into space were completely banned, the issue would not go away. The transhumanist movement and the development of cybernetic technology will proceed on Earth completely independent of space activities. There is simply too much advantage to be had for those who possess them. The humans of 2020 are not the final evolutionary product, and Nietzsche’s Übermensch (or Star Trek’s Khan Noonien Singh and his augments) pose important ethical and even existential problems. But these technologies will not be avoided by restricting space expansionism.

A third technology that worries Deudney is nanotechnology. This technology enables construction of materials and machines from basic molecules. The big fear of nanotechnology is the construction of tiny machines that disassemble anything and everything they encounter and use the resulting molecules to make more of themselves, without end, until the entire planet is covered with them. This is known as the ‘gray goo’ scenario and it terminates humanity and indeed all life on the Earth. But nanotechnology is actively being pursued by numerous companies and countries because it has such tremendous potential. Nanotechnology would be very useful for space development but again, not essential.

Artificial intelligence is yet another technology that Deudney, and others, are very concerned about. It offers great promise and great peril. Again, because of the potential advantages, it will be developed, and while potentially very useful for space activities, it is not essential.

These four technologies are intertwined, very powerful, and very dangerous. But because they are potentially so valuable, and so useful, they will be developed by someone at some point. Deudney’s fear that space expansion will accelerate their development, while possibly true, is irrelevant. They will be developed, unless a totalitarian world government using advanced surveillance and coercive technologies prevents it. In that case the cure would be very bad, though in this particular instance perhaps not as bad as the disease. Deudney fails to recognize that space expansionism offers some prospect of mitigating the risks of these technologies by allowing them to be developed at isolated research facilities in space that could be obliterated should something dangerous escape.

Deudney spends some time discussing the militarization of space. He seems to associate nuclear-tipped missiles, and the resulting risk of nuclear annihilation, with space expansionism simply because such weapons of mass destruction travel through space and can arrive at any point on Earth minutes after launch. He doesn’t acknowledge that the first nuclear weapons were delivered by piston-engine aircraft, and that today hypersonic cruise missiles can deliver such warheads just fine without going into space. The Russians have nuclear-tipped torpedoes capable of destroying large harbors. Squashing the dreams of space expansionists will not in any way reduce the threat of nuclear war, and could arguably increase it as resource depletion combines with growing population pressure. Ronald Reagan’s Star Wars initiative was actually intended to prevent nuclear weapons from traveling through space, but Deudney views this effort as simply another attempt at the militarization of space and thus something to be resisted.

Space (including Earth orbit) is currently effectively demilitarized. No nuclear weapons are stationed in space and no kinetic or beam weapon systems exist that can operate from space. Space technology offers the possibility of Earth orbit being filled with beneficial infrastructure such as communication, surveillance, weather, and positioning satellites, along with solar power stations and even some dirty industries. Deudney points out that with the ability to place this infrastructure in orbit comes the ability to place large weapon systems there as well. Orbital weapon systems would be capable of striking any point on Earth with nuclear, kinetic, or energy beam weapons within minutes of the decision to do so. It is the ultimate high ground and the nation that can achieve unchallenged military control of Earth orbit can dictate to the other nations of Earth, resulting in a de facto world government. But nuclear weapons can be delivered without having to travel through space, somewhat undermining Deudney’s argument.

While a world government would probably use space-based weapons to exert control over troublesome provinces, the argument that space-based weapons would lead to a world government is somewhat weak. The question of the desirability of a world government is very real but is effectively independent of the space colonization question. North Korea stands as a stark warning of what a world government might look like. Its citizens endure starvation and concentration camps while the rulers demand not just total compliance in all actions but sincerely held correct beliefs. The ruling elite, of course, live very well indeed. And the North Korean political system cannot be overthrown from within; only external forces can remove the current system or force it to moderate its actions. A world government based on the North Korean model, equipped with advanced surveillance and coercive technologies, would face no external threat to force it to moderate its actions or ever remove it from power. One possible exception is human colonies on Mars or in the asteroid belt, which might serve as the outside force keeping a world government in check, at least to some degree.

Asteroids are common throughout the solar system and occasionally smash into Earth, sometimes with catastrophic consequences; just ask the dinosaurs how that turned out for them. It has been said that asteroids are nature’s way of asking, “How is your space program coming?” Space expansionists claim asteroid protection as one reason to go into space in a big way: to protect the Earth. But Deudney points out that the ability to deflect an asteroid also implies the ability to direct an asteroid to a specific destination. In the wrong hands, such an ability actually increases the probability of a massive asteroid impact with Earth rather than reducing it.

Deudney suggests that space settlements have a dark side. The term space settlements as used by Deudney includes lunar colonies, artificial space habitats (O’Neill cylinders, Stanford tori, Bernal spheres, etc.), asteroid settlements, and terraformed worlds. Building space settlements involves materials engineering and energies high enough to be suitable for warfare. This represents a variant of the asteroid problem: in the wrong hands, this technology could do terrible things.

Terraforming is the transformation of a planet, such as Mars or Venus, to resemble the Earth and support human and other Earth life forms. Terraforming requires high energies, long time periods, and the transport of large masses around a solar system. Deudney points out that the ability to make a dead planet live also implies the ability to make a living world sterile.

In addition, individual space settlements will contain thousands, or at most a few million, individuals. Their life support systems and structural integrity are fragile things, requiring a high degree of trust and/or control of the population to identify and remove unstable or dangerous individuals. Rather than being islands of freedom, space settlements could become, and maybe must become, micro-totalitarian states. And like the Greek city-states of antiquity, they may find reasons to war among themselves, and perhaps with Earth. They will war with weapons far deadlier than anything carried by the Greeks.

As space settlements are built farther and farther out toward the edges of the solar system, perhaps around the gas giants and their moons, they become isolated. Over time, humanity could branch into new species, perhaps unable to interbreed. Rather than encountering aliens, we will create them. With the aid of genetic engineering and cybernetics, discussed above, this divergence could occur relatively quickly. Even if a central Earth government and most space settlements agreed to forgo genetic engineering and cybernetic modification of humans, it would take only one isolated settlement pursuing this line of research to produce something quite alien and perhaps anti-human.

To the best of my ability, I have tried to identify and list here all of the arguments Deudney offers for why space expansionism could decrease the probability of humanity’s survival. Many of the issues he raises are indeed existential threats to humanity, though not because of anything the space expansionists propose, and they deserve serious consideration. These include genetic engineering, cybernetics, nanotechnology, and AI. They are real threats but also real opportunities.

Expanding into space places god-like destructive powers into the hands of those moving asteroids or large space freighters. In all likelihood, propulsion systems will utilize fusion power of some type, again conferring god-like destructive powers of a different nature. Interstellar missions will be capable of moving large masses at some percentage of the speed of light. Take a space shuttle, run it up to only 10% of the speed of light, and you have a planet killer. We should ensure that individuals embarking on interstellar missions have a deep respect for, and love of, Earth. How do we protect Earth from even one slightly deranged or evil individual who has control of an asteroid (or starship) and can direct it at a target of his, or her, choice? Space expansionists need to address this question. Are we looking at a priesthood-type space patrol, or something else?
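To put a rough number on the “planet killer” claim, here is a back-of-envelope calculation. The orbiter-class mass of roughly 100 tonnes is an assumption chosen purely for illustration, and at 10% of lightspeed the Newtonian kinetic energy formula is still a reasonable approximation:

```python
# Back-of-envelope kinetic energy of a shuttle-sized object at 0.1 c.
# The ~100-tonne mass is an illustrative assumption, not a precise figure.
mass_kg = 1.0e5          # assumed orbiter-class mass
c = 3.0e8                # speed of light, m/s
v = 0.1 * c              # 10% of lightspeed

kinetic_energy_j = 0.5 * mass_kg * v**2      # Newtonian KE is adequate at 0.1 c
tnt_megatons = kinetic_energy_j / 4.184e15   # 1 megaton TNT = 4.184e15 J

print(f"Kinetic energy: {kinetic_energy_j:.2e} J")
print(f"TNT equivalent: {tnt_megatons:.0f} megatons")
# Roughly ten gigatons of TNT equivalent, hundreds of times the yield of the
# largest thermonuclear weapon ever tested.
```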

But perhaps the big takeaway from Deudney’s effort involves government and how humanity will choose to govern itself. Globalists view a single world government as a means to reduce violence and warfare on Earth, perhaps ending the existential threat of nuclear war once and for all. Others view a single world government as a threat to freedom and a short journey to a totalitarian nightmare. But can a single world government control a solar system with dozens of lunar settlements, thousands of asteroid settlements, perhaps a couple of terraformed planets each with a growing population in the millions or even billions, and thousands of space settlements, some of them in the Oort cloud? Add in genetic engineering, cybernetics, and AI, and you have something new in human experience. How is conflict resolved? Are there indeed dangerous technologies that should be proscribed, and if so, how is that done? How does all of this relate to the Fermi Paradox? Once interstellar missions are underway, the questions only multiply. It is unclear what the answer to this problem is, which does not mean there is no solution. The space expansionists’ dreams face countless problems, and this one needs to be added to the list.

Deudney perhaps overstates his case and many of his arguments are flawed, but he does raise some valid points, points that space expansionists need to address. Looking into the future, questions of how humanity deals with Star Trek’s Khan Noonien Singh and his augments (or, if you like, Nietzsche’s Übermensch) are very real and very important, but separate from the space expansion question.

Deudney is also correct that Earth is vital to future human expansion into the solar system and must be preserved at all costs. Space settlements and asteroid settlements will probably depend on living systems that must be renewed periodically by importing plants, animals, bacteria, and viruses from Earth. Terraforming planets depends on life from Earth, and even space settlements and terraforming efforts around distant stars will depend on life imported from Earth. Earth must be preserved if space expansionists are to realize their visions.

The Universe has a number of methods for sterilizing entire planets. Deudney mentions asteroid impacts, but he doesn’t address gamma ray bursts (GRBs). If we can deal with the problem of the unstable or evil individual, then space expansionists can protect Earth from asteroids and comets, and even the occasional runaway space freighter. GRBs, however, arrive with little warning and can irradiate Earth and other terraformed planets with intense gamma rays, destroying the ozone layer and leading to environmental disaster and eventual mass extinctions. Space settlements, by contrast, can be built with very heavy shielding and have no ozone layer to lose; they could survive a GRB far better than a planet. Space-based colonies could then render aid to Earth, repairing the ozone layer and restoring the biosphere using techniques developed for terraforming.

Yes, Deudney is correct: the dreams of the space expansionists represent a two-edged sword for humanity. But sometimes a sharp sword is all that stands between you and eternal darkness.


Cometary Alignments and the Galactic Tide

A second ecliptic? What an interesting notion, referred to in a new paper from Arika Higuchi as an ‘empty ecliptic,’ a second alignment plane for the Solar System. This is lively stuff, examined in the Astronomical Journal in a study that focuses on the aphelia of long-period comets, the points where they are farthest from the Sun in their orbits. The solutions arrived at through the paper’s dense mathematics show that the aphelia fall close to one or the other of these two planes, and offer insights into comet formation.

Higuchi (University of Occupational and Environmental Health, Japan) has previously been a part of the National Astronomical Observatory of Japan’s RISE project, RISE standing for Research of Interior Structure and Evolution of solar system bodies. Her work on the orbital evolution of planetesimals goes back at least to 2007 in a paper on the formation of the Oort Cloud, considering the effects of interactions with the ‘galactic tide,’ a reference to the influence of the gravitational field of the galaxy on Solar System objects analyzed through the equations governing orbital motion. The new paper is an extension of the 2007 work, one that derives “the analytical solutions to the Galactic longitude and latitude of the direction of aphelion.”

We know that long-period comets are not confined to the ecliptic, but models of the Solar System’s formation suggest that they formed on the ecliptic and were subsequently scattered through gravitational interactions into the orbits we see today. What Higuchi finds is that even given scattering interactions with the gas giant planets, the cometary aphelia should remain near the ecliptic. That they do not necessarily do so calls for an explanation that the author finds in the influence of the Milky Way’s gravitational field. It turns out that the aphelia of long-period comets, when this influence is taken into account, tend to collect around two planes.

The first is, as you would expect, the ecliptic with which we are all familiar. The second is the ‘empty ecliptic,’ a plane inclined with respect to the disk of the Milky Way by about 60 degrees, just as the ecliptic itself is inclined by 60 degrees, but in the opposite direction. Here we’re cautioned to be careful with the label, because the empty ecliptic is ‘empty’ only in the early days of the system. Over time, it comes to be populated with scattered comets, a population that has implications for how we go about finding long-period comets in the future.

In the passage from the paper that follows, L and B refer to the galactic longitude and latitude of the direction of aphelion. q refers to the perihelion distance, while i refers to inclination with respect to the ecliptic plane:

The concentration of long-period comets from the Oort cloud on the ecliptic and empty ecliptic planes is an observational evidence that the Oort cloud comets were planetesimals initially on the ecliptic plane. We expect the concentrations even when we consider the effect of passing stars. Perturbations from passing stars change the conserved quantities and may break the relation between q, B, and L more or less; however, it takes a much longer time to change the eccentricity vector (i.e., L and B) than to change i (Higuchi & Kokubo 2015). Therefore, we suggest that observers, including the space mission Comet Interceptor, focus on the ecliptic plane and/or the empty ecliptic plane to find dynamically new comets.

Image: Artist’s impression of the distribution of long-period comets. The converging lines represent the paths of the comets. The ecliptic plane is shown in yellow and the empty ecliptic is shown in blue. The background grid represents the plane of the Galactic disk. (Credit: NAOJ).
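For readers who want to see what the bookkeeping behind the quoted L and B looks like, here is a minimal sketch (not the paper’s code) that converts a comet’s ecliptic orbital elements into the galactic longitude and latitude of its aphelion direction. The orbital elements shown are placeholders, and the use of astropy’s ecliptic and galactic frames is an implementation assumption:

```python
# Sketch: galactic (L, B) of a long-period comet's aphelion direction,
# computed from ecliptic orbital elements. Illustrative only; the elements
# below are placeholders, not data from the paper.
import numpy as np
import astropy.units as u
from astropy.coordinates import SkyCoord

def aphelion_galactic(node_deg, argp_deg, incl_deg):
    """Return galactic (l, b) in degrees of the aphelion direction."""
    Om, w, inc = np.radians([node_deg, argp_deg, incl_deg])
    # Unit vector toward perihelion in heliocentric ecliptic coordinates.
    px = np.cos(Om) * np.cos(w) - np.sin(Om) * np.sin(w) * np.cos(inc)
    py = np.sin(Om) * np.cos(w) + np.cos(Om) * np.sin(w) * np.cos(inc)
    pz = np.sin(w) * np.sin(inc)
    # Aphelion lies in the opposite direction.
    lon = np.degrees(np.arctan2(-py, -px)) % 360.0
    lat = np.degrees(np.arcsin(-pz))
    # Ecliptic -> galactic conversion (frame names assume a recent astropy).
    coord = SkyCoord(lon=lon * u.deg, lat=lat * u.deg,
                     frame="barycentricmeanecliptic")
    gal = coord.galactic
    return gal.l.deg, gal.b.deg

# Placeholder elements: longitude of node, argument of perihelion, inclination.
L, B = aphelion_galactic(node_deg=145.0, argp_deg=30.0, incl_deg=120.0)
print(f"Aphelion direction: galactic L = {L:.1f} deg, B = {B:.1f} deg")
```

Run over a catalog of long-period comets, a calculation of this kind is what lets the aphelion directions be binned in galactic coordinates and compared with the predicted concentrations.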

Higuchi cross-checked her mathematical results against numerical computations run largely at NAOJ’s PC Cluster at the Center for Computational Astrophysics. She is able to show that the analytical and computational results she derives square with the data for long-period comets listed in NASA’s Small Body Database at JPL, identifying the two peaks expected near the ecliptic and the empty ecliptic. This favors the hypothesis that long-period comets originally formed on the ecliptic. We can use upcoming surveys to refine these results. Says the author:

“The sharp peaks are not exactly at the ecliptic or empty ecliptic planes, but near them. An investigation of the distribution of observed small bodies has to include many factors. Detailed examination of the distribution of long-period comets will be our future work. The all-sky survey project known as the Legacy Survey of Space and Time (LSST) will provide valuable information for this study.”

What an interesting result. We consider long-period comets from the Oort Cloud as forming on the ecliptic plane just as the planets did, but now we move to the view that their orbital evolution must be examined not just in terms of interactions with large objects in the early system, but also with the gravitational tide of the galaxy. You’ll recall that the aphelia of various objects in the Solar System have been considered in terms of possible perturbers within the system, including hitherto undiscovered planets. Their potential for unlocking long-period comet distribution in useful ways is one I had never considered until I ran into Higuchi’s work this morning.

The paper is Higuchi, “Anisotropy of Long-period Comets Explained by Their Formation Process,” Astronomical Journal Vol. 160, No. 3 (26 August 2020). Abstract / preprint. The 2007 paper is Higuchi et al., “Orbital Evolution of Planetesimals due to the Galactic Tide: Formation of the Comet Cloud,” Astronomical Journal Vol. 134, No. 4 (2007). Abstract.


WASP-189b: An Impressive Debut for CHEOPS

The European Space Agency’s CHaracterising ExOPlanet Satellite (CHEOPS) space telescope reached space in December of 2019, achieving a Sun-synchronous orbit some 700 kilometers up. The instrument has begun its observations of stars near the Sun that are already known to have planetary companions. The idea is to use the 30 cm optical telescope to constrain radius information for these worlds, previously identified in transit and radial velocity studies.

Transiting planets are particularly useful here, because tightening up their radius measurements means we get a better idea of their density, factoring in mass estimates provided by subsequent radial velocity follow-ups. It’s great to see the instrument already hard at work, with measurements of the giant planet WASP-189b, some 325 light years from the Sun, showing us a world that is one of the hottest known, with a likely dayside temperature around 3400 K (roughly 3100℃). By comparison, the surface temperature of the Sun is about 5800 K, while smaller M dwarfs can have temperatures below that of this incendiary planet, a world hotter than some main sequence stars.

Image: CHEOPS observations of WASP-189b in front of and behind its star. Credit: ESA.

First detected in 2018, WASP-189b turns out to have an equatorial diameter of about 220,000 kilometres, some 1.6 times that of Jupiter, a figure 15% higher than previous estimates. Also noteworthy is its highly inclined orbit, which carries it close over the poles of the star and is studied through both the transit and the occultation (as the planet moves behind the star). At 7.5 million kilometers from its star, WASP-189b is 20 times closer to its primary than Earth is to the Sun, completing a revolution in a scant 2.7 days. Monika Lendl (University of Geneva) is lead author of the paper on this work:

“Because the exoplanet WASP-189b is so close to its star, its dayside is so bright that we can even measure the ‘missing’ light when the planet passes behind its star; this is called an occultation. We have observed several such occultations of WASP-189b with CHEOPS. It appears that the planet does not reflect a lot of starlight. Instead, most of the starlight gets absorbed by the planet, heating it up and making it shine.”
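As a quick consistency check on the numbers above, Kepler’s third law links the quoted orbital distance and period to the mass of the host star. The sketch below assumes the 7.5 million kilometers is the orbital semi-major axis and neglects the planet’s own mass; the result comes out a little over two solar masses, in the right range for an A-type star:

```python
# Kepler's third law check: what stellar mass do the quoted orbital
# distance (~7.5 million km) and period (~2.7 days) imply?
import math

G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30         # solar mass, kg

a = 7.5e9                # semi-major axis, m (7.5 million km, assumed)
P = 2.7 * 86400.0        # orbital period, s

# M_star + M_planet = 4 pi^2 a^3 / (G P^2); the planet's mass is negligible here.
m_star = 4.0 * math.pi**2 * a**3 / (G * P**2)
print(f"Implied stellar mass: {m_star / M_SUN:.1f} solar masses")
# Prints a little over 2 solar masses, consistent with an A-type host star.
```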

The A-class host star is itself notable, an object that rotates fast enough to deform itself, with an equatorial radius greater than the polar radius, so that it is cooler at the equator than at the poles. The poles thus appear brighter in the CHEOPS data, an asymmetry the authors are able to use to determine the spin-orbit angle of the planet. Needless to say, the planet’s highly inclined orbit raises questions about its formation, given that we would expect both star and planet to have developed from a common disk of gas and dust. Past gravitational interactions are the likely cause, forcing the planet to migrate inward.

From the paper:

WASP-189 b is one of the most highly irradiated planets known thus far, with a dayside equilibrium temperature of ∼ 3400 K (Anderson et al. 2018). It orbits an early-type star similarly to the extreme object KELT-9b (Gaudi et al. 2017), but with a longer orbital period of 2.7 days, placing it closer, in temperature, to ultra-short period planets orbiting F and G stars. As such, this object allows us to comparatively probe the impact of different stellar spectral energy distributions and, in particular, strong short wavelength irradiation on planetary atmospheres. As it is orbiting around an A-type star, the system is also relatively young (730 ± 130 Myr, see Section 2.2), providing us with a window into the atmospheric evolution of close-in gas giants.

So CHEOPS has given us a tighter look at WASP-189b, obviously useful information, but what this paper really demonstrates is the power of the space observatory at detecting extremely shallow signals, as is necessary to gauge brightness variations between pole and equator in stars like the primary here. From this we learn about the planet’s unusual polar orbit. We can look forward to measurements of much cooler planets at similarly high levels of precision. The best-case scenarios, discussed in the paper, will be objects for which CHEOPS can make phase curve studies that reveal information on the distribution of clouds in planetary atmospheres. The resulting transit and occultation catalog will be large and instructive.

Image: CHEOPS results of the observation of WASP-189b. Credit: ESA.

The paper is Lendl et al., “The hot dayside and asymmetric transit of WASP-189 b seen by CHEOPS,” in process at Astronomy & Astrophysics (abstract / preprint).


New Approaches to the Age of Saturn’s Moons

The presence of the always intriguing Titan brings into sharper focus recent work by Samuel Bell (Planetary Science Institute) on the age of the moons of Saturn. Given the active weathering visible on Titan, the assumption that its surface is at least four billion years old, which draws on earlier work on the age of Saturn’s moon system, sits uneasily with the lakes, mountains, riverbeds and dunes we see in the Cassini data. Bell argues that an older Titan would have to be one with an extremely low erosion rate and minimal resurfacing.

But maybe Titan is younger than we’ve thought. Bell assembles the context of Titan in the overall system at Saturn by studying the cratering rate on the various moons. Determining the age of a planetary surface — think Mars or the Moon — is generally done by counting the impact craters and weighing this against the cratering rate. At Saturn, the problem is that the cratering rate is not known. It would be one value if, as previous work has assumed, the craters on the Saturnian moons all came from objects orbiting the Sun. Bell wondered if this was true:

“If the impacts came solely from Sun-orbiting objects, the relative cratering rate would be much, much higher the closer the moons are to Saturn. However, the crater densities of the oldest surfaces of Mimas, Tethys, Dione, Rhea, and Iapetus are all relatively similar. It would be too much of a coincidence for the ages of the oldest surfaces on each moon to vary by the exact amounts necessary to produce broadly similar crater densities. As a result, it seems much likelier that the impactors actually come from objects orbiting Saturn itself, moonlets that would be too small to detect with current technology.”
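A toy calculation illustrates the logic of that argument. The relative rates below are purely schematic (they are not values from Bell’s paper); the point is only that the age you infer from a given crater density scales inversely with the cratering rate you assume:

```python
# Toy illustration of the argument, with purely schematic numbers
# (these are NOT values from Bell's paper).
# Surface age ~ observed crater density / cratering rate, so equal crater
# densities imply very different ages if the rates differ strongly moon to moon.

observed_density = 1.0   # similar crater density on each moon's oldest terrain (arbitrary units)

# Schematic relative cratering rates, inner moons to outer:
heliocentric_rates  = {"Mimas": 20.0, "Tethys": 8.0, "Dione": 5.0, "Rhea": 3.0, "Iapetus": 1.0}
planetocentric_rates = {"Mimas": 1.5, "Tethys": 1.2, "Dione": 1.0, "Rhea": 1.0, "Iapetus": 0.8}

for label, rates in [("Sun-orbiting impactors", heliocentric_rates),
                     ("Saturn-orbiting impactors", planetocentric_rates)]:
    ages = {moon: observed_density / rate for moon, rate in rates.items()}
    spread = max(ages.values()) / min(ages.values())
    print(f"{label}: implied age spread between moons ~ {spread:.0f}x")

# Under Sun-orbiting impactors, similar crater densities would require the moons'
# oldest surfaces to differ in age by a large factor, an unlikely coincidence.
# Under Saturn-orbiting impactors, similar densities imply broadly similar ages.
```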

Image: This mosaic of Saturn’s moon Mimas showing its cratered surface was created from images taken by NASA’s Cassini spacecraft. Credit: NASA/JPL-Caltech/Space Science Institute.

A new chronology emerges if we accept this model. Saturn-orbiting impactors allow a younger age to be calculated, one that, for Titan, more clearly squares with the Cassini data.

Bell is clear about the factors of system age that we have yet to explain, and acknowledges that an older Titan is still possible. From the paper:

I… prefer a model of dominantly planetocentric cratering, with an impactor production function that probably does not vary by more than a factor of ~5 between Mimas and Iapetus. This planetocentric cratering model makes the young moons hypothesis possible and implies that the cratered plains of Mimas, Tethys, Dione, Rhea, and Iapetus are of broadly similar age. Under this model, the surface of Titan is definitely younger than the cratered plains of Rhea and Iapetus, and it could easily be much, much younger. However, due to lack of constraints on the planetocentric cratering rate and how it varies with time, the planetocentric model provides very limited constraints in terms of absolute age. While it suggests a vigorously resurfaced Titan with a young surface, the model cannot rule out a surface of Titan that dates back to the early solar system, a very old surface with a very slow erosion rate and negligible endogenic resurfacing.

I bring all this up this morning to add context to a 2019 paper on Saturn’s moons from Marc Neveu (NASA GSFC) and Alyssa Rhoden (Southwest Research Institute). In “Evolution of Saturn’s Mid-sized Moons,” the duo make the case that the orbits of Mimas, Enceladus, Tethys, Dione and Rhea are hard to square with their geology. From their paper:

The moons’ ages are debated. Their crater distributions, assuming Sun-orbiting impactors extrapolated from present-day observed small-body populations, suggest surfaces billions of years old. Conversely, the measured fast expansion of their orbits, probably due to tides raised by the moons on Saturn, indicates— assuming dissipation levels that are constant over both time and frequency of tidal excitation—that this relatively compact moon system is less than a billion years old. This could explain why some moons may not have encountered predicted orbital resonances, and supports scenarios of non-primordial formation from debris of the tidal or collisional disruption of progenitor moons.
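For context on the tidal mechanism invoked here, the standard textbook rate at which a moon of mass m expands its orbit by raising tides on a primary of mass M and radius R is (this is the generic expression, not the authors’ full model):

```latex
% Tidally driven orbital expansion of a satellite (textbook form)
\frac{da}{dt} \;=\; 3\,\frac{k_{2}}{Q}\,\frac{m}{M}\left(\frac{R}{a}\right)^{5} n\, a
```

Here k_2 and Q are the primary’s tidal Love number and dissipation factor, a is the moon’s semi-major axis and n its mean motion. Because n falls off as a^(-3/2), the rate drops steeply with distance, so the innermost moons migrate fastest; extrapolating the measured expansion backward at constant dissipation is what yields the sub-billion-year age quoted above.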

The scientists have run numerical simulations coupling geophysical and orbital evolution over a 4.5 billion year period, with the orbits expanding with time through tidal effects. For the overview, let me just quote the abstract below, as I’m short on time this morning. But notice the ramifications of system age for another interesting moon, Enceladus:

Dissipation within the moons decreases their eccentricities, which are episodically increased by moon−moon interactions, causing past or present oceans to exist in the interiors of Enceladus, Dione and Tethys. In contrast, Mimas’s proximity to Saturn’s rings generates interactions that cause such rapid orbital expansion that Mimas must have formed only 0.1−1 billion years ago if it postdates the rings. The resulting lack of radionuclides keeps it geologically inactive. These simulations explain the Mimas−Enceladus dichotomy, reconcile the moons’ orbital properties and geological diversity, and self-consistently produce a recent ocean on Enceladus.

But back to Samuel Bell, who is clearly right about how meager our knowledge of the evolution of this system of moons really is:

“With the new chronology, we can much more accurately quantify what we do and don’t know about the ages of the moons and the features on them. The grand scale history of the Saturn system still hides many mysteries, but it is beginning to come into focus.”

The paper is Bell, “Relative Crater Scaling Between the Major Moons of Saturn: Implications for Planetocentric Cratering and the Surface Age of Titan,” Journal of Geophysical Research Planets 26 May 2020 (abstract). The Neveu and Rhoden paper is “Evolution of Saturn’s mid-sized moons,” Nature Astronomy 3 (1 April 2019), 543-552 (abstract).


Is there a single technology that can take us from being capable of reaching space to actually building a system-wide infrastructure? Or at least getting to a tipping point that makes the latter possible, one that Nick Nielsen, in today’s essay, refers to as a ‘space breakout’? We can think of game-changing devices like the printing press with Gutenberg’s movable type, or James Watt’s steam engine, as altering, even creating, the shape and texture of their times. The issue for space enthusiasts is how our times might be similarly altered. Nick here follows up an earlier investigation of spacefaring mythologies with this look at indispensable technologies, forcing the question of whether such technologies exist, or whether technologies necessarily come in clusters that reinforce each other’s effects. The more topical question: What is holding back a spacefaring future that after the Apollo landings had seemed all but certain? Nielsen, a frequent author in these pages, is a prolific writer whose work can be tracked in Grand Strategy: The View from Oregon, and Grand Strategy Annex.

by J. N. Nielsen

1. Another Hypothesis on a Sufficient Condition for Spacefaring Civilization
2. The Nineteenth Century and the Steam Engine
3. The Twentieth Century and the Internal Combustion Engine
4. The Twenty-First Century and the Energy Problem
5. The World That Might Have Been: Accessible Fission Technology
6. Nuclear Rocketry as a Transformative Technology
7. Practical, Accessible, and Ubiquitous Technologies
8. The Potential of an Age of Fusion Technology
9. Indispensability and Fungibility
10. Four Hypotheses on Spacefaring Breakout

1. Another Hypothesis on a Sufficient Condition for Spacefaring Civilization

Civilization is the largest, the longest lived, and the most complex institution that human beings have built. As such, describing civilization and the mechanisms by which it originates, grows, develops, matures, declines, and becomes extinct is difficult. It is to be expected that there will be multiple explanations to account for any major transition in civilization. At our present state of understanding, the best we can hope to do is to rough out the possible classes of explanations and so lay the groundwork for future discussions that penetrate into greater depth of detail. It is in this spirit that I want to return to the argument I made in an earlier Centauri Dreams post about the origins of spacefaring civilization.

The central argument of Bound in Shallows was that, while being a space-capable civilization is a necessary condition of being a spacefaring civilization, an adequate mythology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization. According to this argument, the contemporary institutional drift of the space program and of our civilization is a result of no contemporary mythology being readily available (or, if available, such a mythology remains unexploited) to serve as the social framework within which a spacefaring breakout could be understood, motivated, rationalized, and justified.

In the present essay I will consider an alternative hypothesis on the origins of spacefaring civilization, again building on the fact that we are, today, a space-capable civilization that has not yet experienced a spacefaring breakout. The alternative hypothesis is that a key technology is necessary to great transitions in the history of civilization, and that a key technology is like the keystone of an arch: when present, it holds together a stable structure that will endure; when absent, the structure collapses. Successful civilizations see a sequence of key technologies exploited at moments of opportunity that allow civilization to internally revolutionize itself and so avoid stagnation. I will call this the technological indispensability hypothesis.

There are many key technologies that could be identified—the bone needle, agriculture, written language, the moveable type printing press—each of which represented a major turning point in human history when the technology in question was exploited to the fullness of its potential. We will take up this development relatively late in the history of civilization, beginning with the steam engine as the crucial technology of the industrial revolution, and therefore the technology responsible for the breakthrough to industrialized civilization.

[Indian & Primrose Mills steam engine, built in 1884, in service until 1981]

2. The Nineteenth Century and the Steam Engine

The nineteenth century belonged to steam power, which built upon previous technological innovations and laid the groundwork for the large-scale exploitation of later technologies. It was steam power that enabled the industrial revolution, an inflection point in human agency, both in our ability to reshape our environment and in our ability to harness energy for human use on ever-greater scales. Without the rapid adoption and large-scale exploitation of steam engine technologies for shipping, railways, resource extraction, and industrial production as the model for industrialized civilization, later technological developments (like the internal combustion engine or the electric motor) probably would not have been so effectively exploited.

Almost two hundred years of continuous development, each stage building on prior technologies, separate the earliest steam devices from James Watt’s steam engine (not counting ancient steam turbines such as that of Hero of Alexandria, which was not a stepping stone to later developments). A series of inventors, starting in the early seventeenth century—Giovanni Battista della Porta (1535-1615), Jerónimo de Ayanz y Beaumont (1553-1613), Edward Somerset, second Marquess of Worcester (1602-1667), Denis Papin (1647-1713), Thomas Savery (1650-1715), and Jean Desaguliers (1683-1744)—created steam-powered devices of increasing efficiency and utility. And, of course, while James Watt’s steam engine was the culmination of these developments, it was not an end point of design, but the point of origin of the exponential technological improvements that followed.

The technology of the steam engine, then, could be construed as a key technology that enabled the industrial revolution. Previous labor-saving technologies—not only earlier forms of the steam engine as implied by the evolution of that technology, but also water mill and windmill technology known since classical antiquity—were limited by their inefficiency and by the sources of energy they harvested. The steam engine, once understood, was capable of increasing efficiency both through improved design and precision engineering, and it allowed human beings to tap into sources of energy sufficiently plentiful and dense that powered machine works could, in principle, be installed at almost any location and be operated continuously for as long as fuel could be supplied (which supply was facilitated by the energy density of the fuel, first coal for steam technologies, then oil for the internal combustion engine).

About fifty years after Watt’s later iterations of his steam engine design, Sadi Carnot published Réflexions sur la puissance motrice du feu et sur les machines propres à développer cette puissance (Reflections on the Motive Power of Fire and on Machines Fitted to Develop that Power, 1824), and in so doing systematically assimilated steam engine technology to the conceptual framework of science. It was this scientific understanding of what exactly the steam engine was doing that made it possible to improve the technology beyond the limits of tinkering (or what we might today call “hacking”). As we shall see, however, the full exploitation of a transformative technology seems to require both scientific development and practical tinkering.
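The core of that scientific understanding can be written in one line: the efficiency of any heat engine operating between a hot reservoir at absolute temperature T_h and a cold reservoir at T_c is bounded by

```latex
% Carnot's limit on heat-engine efficiency (temperatures in kelvin)
\eta_{\max} \;=\; 1 - \frac{T_c}{T_h}
```

The bound rewards higher steam temperatures and pressures and better condensers rather than merely cleverer linkages, which is the direction engine development in fact took once the science was in hand.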

In regard to my thesis in Bound in Shallows, mythologies present in the Victorian age that enabled the exploitation of steam technology could include belief in human progress and belief in the distinctive institutions of Victorian society. To take the latter first, in The Victorian Achievement I argued that the ability of Victorian England to keep itself intact despite the wrenching changes wrought by the industrial revolution was key to the success of the industrial revolution: “[Victorian civilization] achieved nothing less than the orderly transition from agricultural civilization to industrialized civilization.”

At the same time that a civilization must internally revolutionize itself in order to avoid stagnation, it must also provide for continuity by way of some tradition that transcends the difference between past, present, and future. The ideology of Victorian society made this possible for England during the industrial revolution. A sufficiently large internal revolution that fails to maintain some continuity of tradition could result in the emergence of a new kind of civilization that must furnish itself with novel institutions or reveal itself as stillborn. If the population of a revolutionized civilization cannot be brought along with the radical changes in social institutions, however, the internal revolution, rather than staving off stagnation, simply becomes an elaborate and complex form of catastrophic failure in which a society approaches an inflection point and cannot complete the process, coming to grief rather than advancing to the next stage of development.

It has become a commonplace of historiography that nineteenth century Europe, and Victorian England in particular, believed in a “cult of progress”; the studies on this question are too numerous to cite. A revisionary history might seek to overturn this consensus, but let us suppose it is true. If belief in progress distinctively marked the nineteenth century engagement with the earliest industrial technologies, we can regard it as a state of mind antithetical to what Gilbert Murray called a “failure of nerve” [1], and such a steeling of nerve may have been what was necessary for a previously agricultural economy to be rapidly transformed into an industrialized economy and survive the transition intact.

At this point, we can argue equally well for the indispensability of technology or the indispensability of mythology in the advent of a transformation in civilization, but now we will pass on to further developments of the industrial revolution. After the age of the steam engine, the twentieth century belonged to the internal combustion engine burning fossil fuel. It was the internal combustion engine that drove the technological and economic modernity first revealed by steam technology to new heights.

[The Wärtsilä-Sulzer RTA96-C internal combustion engine]

3. The Twentieth Century and the Internal Combustion Engine

The key technology of the twentieth century, and the successor technology to the steam engine, was the internal combustion engine. The first diesel engine was built in 1897, and the diesel engine rapidly found itself employed in a variety of industrial applications, especially in transportation: shipping, railroads, and trucking. Two-stroke and four-stroke gasoline engines converged on practical designs in the late nineteenth and early twentieth century and began to replace steam engines in those applications where diesel engines had not already replaced steam.

The internal combustion engine has a fuel source that can be stored in bulk (also true of steam engines), and it is scalable. The scalability of the internal combustion engine often goes unremarked, but it is this scalability that ensured the engine’s penetration into all sectors of the economy. An internal combustion engine can be made so small and light that it can be carried around by one person (as in the case of a yard trimmer), and it can be made so large and powerful that it can be used to power the largest ships ever built. [2] The internal combustion engine is sufficiently versatile that it can be dependably employed in automobiles, trucks, trains, ships, power generation facilities, and industrial applications.

While it would be misleading to claim that the internal combustion engine was revolutionary to the degree that the steam engine was revolutionary, it would nevertheless be accurate to say that the internal combustion engine allowed for the expansion and consolidation of the industrialized civilization made possible by the steam engine.

The internal combustion engine proliferated at a time when belief in the institutions of societies undergoing industrialization weakened, and arguably has never recovered, so it would be difficult to argue that the ongoing industrial revolution was driven by a distinctive mythology. The crucial technologies of industrialization continued to develop and advance even as the core mythologies of industrializing societies were questioned as never before. At this point, technology looks more indispensable to ongoing industrialization than does mythology.

The experience of the First World War was a turning point in both technology and social change. I have called the First World War the First Global Industrialized War; for the first time, the war effort was existentially dependent upon fossil-fuel-powered trains, trucks, motorcycles, aircraft, and tanks, which transformed the experience of combat, so that German soldiers thereafter spoke of the “frontline experience” (Fronterlebnis). Even as traditional warfighting seemed to vanish into irrelevance (heroic cavalry charges no longer carried the day or turned the tide), a new kind of industrialized war experience appeared, and we can find this experience not merely described but celebrated by Ernst Jünger in Storm of Steel, Copse 125, and other works.

The war led to the destruction of many political regimes in Europe that had endured for hundreds of years, and saw the appearance of radical new regimes like Soviet Russia, which emerged from the wreckage of Tsarist Russia, which could trace its origins back almost a millennium. Whether these ancient regimes were the victims of a mythology that catastrophically failed in the midst of industrialized warfare, or whether the failed regimes brought down traditional mythologies with them, is probably a chicken-and-egg question. But even as ancient regimes and their associated mythologies failed, technology triumphed, and with technology there arose new forms of human experience, the principal driver of which new experiences was continued technological innovation.

[Reactor dome being lowered into place at Shippingport Atomic Power Station in Pennsylvania]

4. The Twenty-First Century and the Energy Problem

Both steam engines and internal combustion engines exploited the energy of fossil fuels. What economists would call the negative externalities of the trade in fossil fuels that grew in the wake of the adoption of the internal combustion engine included the “resource curse,” which marred the political economy of many nation-states possessing fossil fuels, and the extensive pollution resulting from the extraction, refining, transportation, and consumption of those fuels. No one could have guessed, at the beginning of the twentieth century (much less at the beginning of the nineteenth century), the monstrosity that fossil-fueled internal combustion engines would become, and, by the time our civilization was utterly dependent upon the internal combustion engine, it was too late to do anything except attempt to mitigate the damage of the entire energy infrastructure that had been created to fuel our industries.

Having realized, after the fact, the dependency of industrialized civilization upon fossil fuels, we find ourselves and our society dependent upon industries that have high energy requirements, but lacking the technology to replace these industries at scale. We are trapped by our energy needs.

I am not going to attempt to summarize the large and complex issues of the advantages and disadvantages of energy alternatives, as countless volumes have already been devoted to this topic, but I will only observe that an abundant and non-polluting source of energy is necessary to the continued existence of technological civilization. We can have civilization without abundant and non-polluting sources of energy, but it will not be the energy-profligate civilization we know today. If energy is non-abundant, it must be rationed; and if energy is polluting, we will gradually but inevitably poison ourselves on our own wastes. Both alternatives are suboptimal and eventually dystopian; neither leads to future transformations of civilization that transcend the past by attaining greater complexity.

Just as there are those who argue for the continuing exploitation of fossil fuels without limit, and who appear to be prepared to accept the consequences of this unlimited use of fossil fuels, there are also those who argue for the abandonment of fossil fuels without any replacement, so that our fossil fuel dependent civilization must necessarily come to an end. Among those who argue for the abolition of energy-intensive industry, we can distinguish between those who advocate the complete abolition of technological civilization (Ted Kaczynski, John Zerzan, Derrick Jensen) and those who look toward a kind of “small is beautiful” localism of “eco-communalism” [3] that would preserve some quality of life features of industrialized civilization while severely curtailing consumerism and mass production.

Human beings would accept sacrifices on this scale, including sacrificing their energy demands, if they believed the sacrifice to be meaningful and to contribute to some ultimate purpose (or what Paul Tillich called an “ultimate concern”). In other words, a sufficient mythological basis is necessary to justify great sacrifices. We have seen intimations of this level of ideological engagement and call to sacrifice in the most zealous environmental organizations, such as Extinction Rebellion, which cultivates a quasi-religious intensity among its followers; its “Red Brigade” protesters present themselves with a theatricality certain to attract some while repelling others (I personally find them deeply disturbing). It is unlikely that those who came to maturity within a technological civilization fully understand what the implied sacrifices would entail, but that is irrelevant to the foundation of the movement; if the movement were to be successful, the eventual regret of those caught up in it would not arrest the progress of a new ideology that sweeps aside all impediments to its triumph.

The proliferation of environmental groups since the late twentieth century (the inflection point is often given as being the publication of Rachel Carson’s Silent Spring in 1962) demonstrates that this is a growing movement, but it is not clear that the most zealous groups can seize the narrative of the movement and become the focus of environmental activism. If, however, individuals were inspired by a quasi-religious zealotry to sacrifice energy-intensive living, we cannot rule out the possibility that the intensity of environmental belief could pave the way, so to speak, toward a transformative future for civilization that did not involve energy resources equal to or greater than those in use at present.

Energy resources equal to or greater than those in use today are crucial to any other scenario for the continuation of civilization. In the same way that eight billion or more human beings can only be kept alive by a food production industry scaled as at present, and to tamper with this arrangement would be to court malnutrition and mass starvation, so too eight billion human beings can only be kept alive by an energy industry scaled as at present, and to tamper with this arrangement would be to court disaster. This disaster could be borne if everyone possessed a burning faith in the righteousness of energy sacrifice, but while planning for the needs of mass society may eventually have to reckon with mass conversion experiences, such experiences cannot be the basis of policy; there is no way to impose this kind of belief.

One of the persisting visions of a solution for the energy problem of the twenty-first century is widely and cheaply available electricity that can be used to power electric motors replacing the fossil-fueled engines that now power our industrialized economy. Throughout the nineteenth century dominance of the steam engine and the twentieth century dominance of the internal combustion engine, electric motors were under continual development and improvement. Electric motors came into wide use in industrial applications in the twentieth century, and into limited use for transportation, especially in streetcars, where electrical power could be supplied by overhead lines. This can be and has been done for longer-distance electric railways as well, but the added infrastructure cost of not only laying track but also constructing the electrical power distribution lines has limited electric train development. For ships and planes, electrical power has not been practicable to date. Only now, in the twenty-first century, are electrical technologies advancing to the point that electric aircraft may become practical.

The problem is not electric motors, but the electricity. Providing electricity at industrial scale is a challenge, and we meet that challenge today with fossil fuels, so that even if every form of transportation (automobiles, buses, trucks, shipping, trains, aircraft, etc.) were converted to electric motors, the grid supplying the electricity for these applications would still involve burning fossil fuels. A number of well-heeled businesses have recognized this and installed solar panels on the roofs of their garages so that their equally well-heeled employees can plug in their electric cars while they work. This is an admirable effort, but it is not yet a solution for transportation at the scale demanded by our civilization.

If the electrical grid could be developed either in the direction of highly distributed generation, with a large number of small electricity sources feeding the grid (which could well be renewables), or as a continuation of the centralized generation model but without the fossil fuel dependency of coal, oil, and natural gas generating facilities, then electricity could become the primary energy source for industrial processes with a minimum of compromises (primarily those entailed by the difficulty of storing electricity, i.e., the battery problem). What would replace centralized generation if fossil fuel use were curtailed? There is the tantalizing promise of fusion, but before this technology can supply our energy needs, it would have to be shown to be practicable, accessible, and ubiquitous, which is an achievement above and beyond proof-of-concept for better-than-break-even fusion. At present, there seem to be few alternatives to nuclear fission.

The twenty-first century energy problem is the problem of maintaining the industrialized civilization that was built first upon steam engines and then upon the internal combustion engine. It is partially a problem of the direction our civilization will take, but it is not a problem of managing a transformative technology and the social changes driven by its introduction. The initial introduction of powered machinery was such a transformative technology, but the ability to continue using powered machinery is no longer transformative, merely more of the same.

It is as though we find ourselves, in the early twenty-first century, groping in the dark for a way forward. There is no clear path for the direction of civilization (which would include a path to energy resources commensurate with our energy-intensive way of life), and no consensus on how to define one. This absence can be construed as a mythological deficit, or as the absence of a crucial technology. Here, I think, the balance of the argument favors a mythological deficit, because we possess nuclear technology, but no mythology surrounds the use of nuclear technology that would rationalize and justify its use at industrial scale, or at least no mythology sufficiently potent to overcome the objections to nuclear power.

[The unbuilt Clinch River Breeder Reactor Project (CRBRP)]

5. The World That Might Have Been: Accessible Fission Technology

One of the potential answers to the twenty-first century energy problem is nuclear power, but nuclear power is one of many nuclear technologies, and nuclear technologies taken together, had they been exploited at scale, might have been a transformative technology, both for the maintenance of industrialized civilization without fossil fuels and for the transformation of our planetary industrialized civilization into a spacefaring civilization. Submarines and aircraft carriers are now routinely powered by fission reactors, and it would be possible to engineer fission reactors for railways and aircraft. Ford once proposed the Nucleon automobile, though that level of fission miniaturization is probably impractical. But the nuclearization of our infrastructure has stagnated. Once-ambitious plans to build hundreds of nuclear reactors across the US were scrapped, and instead we find new natural gas generating plants under construction.

Darcy Ribeiro wrote of a “thermonuclear revolution” as one of many technological revolutions constituting civilizational processes, revolutions being “…transformations in man’s ability to exploit nature or to make war that are prodigious enough to produce qualitative alterations in the whole way of life of societies.” [4] But if we do recognize thermonuclear technologies as revolutionary, we cannot say they have fulfilled their revolutionary function, because nuclearization has stagnated. The promise and potential of nuclear technology were never realized, despite plans to the contrary.

While there were plans for the nuclear industry to become a major sector of the US economy, and these plans were largely derailed by construction costs that spiraled due to regulation, the nuclear industry thus conceived and thus derailed was always to be held under the watchful eye of the government and its nuclear regulatory agencies. After the construction of nuclear weapons, it was too late to put the nuclear genie back in the bottle, but if the genie couldn’t be put back, it could be shackled and placed under surveillance. The real worry was proliferation. If fissile materials became easily available, other nation-states would possess nuclear weapons sooner rather than later, and the post-war political imperative was to bring into being a less dangerous world. A world in which nuclear weapons were commonplace would be far more dangerous than the one that preceded the Second World War, so despite the division of the world by the Cold War, the one policy upon which almost all could agree was the tight control of fissile materials, hence the de facto constraints placed upon nuclear science, nuclear technology, and nuclear engineering. [5]

The human factor is as essential in technological development as it is in mythology. The details of a mythology may speak to one person and not another; so, too, a particular technological challenge may speak to one person and not to another. For those who might have had a special bent for nuclear technologies, their moment never arrived. At least two, perhaps three, generations of scientists, technologists, and engineers who would have dedicated their careers to the emerging and rapidly changing technology of nuclear rocketry and its application to space systems had to find another use for their talents. The careers that didn’t happen, and the lives that didn’t unfold, can never be measured, but we should be haunted by the lost opportunity they represent. And perhaps we are haunted; this silent, unremarked loss would account for institutional drift and national malaise (i.e., stagnation) as readily as the absence of a mythology.

Even benign nuclear technologies that do not directly involve fissionable materials have suffered because of their expense. When funding for the Superconducting Super Collider (SSC) was cancelled (after an initial two billion dollars had been spent), an entire generation of American scientists has had to go to CERN in Geneva, because that is where the instrument that allows research at the frontiers of fundamental physics is located. The LHC is the only facility in the world for research into fundamental particle physics at the energy levels it reaches. The expense of nuclear science has been another strike against its potential accessibility. Funding for scientific research is viewed as a zero-sum game, in which a new particle accelerator is understood to mean that some other device does not get funded. Sabine Hossenfelder’s tireless campaign of questioning the construction of ever-larger particle accelerators takes place against this background of zero-sum funding of scientific research. But if science were growing exponentially, as industry grew exponentially during the industrial revolution, there would be few (or at least fewer) conflicts over funding scientific research.

Not only are nuclear technologies politically dangerous and expensive, they are also physically dangerous; extreme care must be taken so that nuclear materials do not kill their handlers. The “demon core” sphere of plutonium, which was slated to be the core of another implosion weapon (tentatively scheduled to be dropped August 19, but the Japanese surrendered on August 15), was responsible for the deaths of Harry Daghlian (in an incident on 21 August 1945) and Louis Slotin (in an incident on 21 May 1946) as they tested the core’s criticality. Fermi had warned Slotin that he would be dead within a year if he failed to observe safety protocols, but apparently there was a certain thrill involved in “tickling the dragon’s tail.” The bravado of young men taking risks with dangerous technology is part of the risk/reward dialectic. Daghlian and Slotin were nuclear tinkerers, and it cost them their lives.

Generally speaking, industrial technologies are dangerous. The enormous machines of the early industrial revolution sometimes failed catastrophically, and took lives when they did so. Sometimes steam boilers exploded; sometimes trains jumped their tracks. Nuclear technologies are subject to dangers of this kind, as well as the unique dangers of the nuclear materials themselves. Because of this extreme danger—partly for reasons of personal safety, and partly for reasons of proliferation, which can be understood as social safety—nuclear reactors have developed toward a model of sealed containers that can operate nearly autonomously for long periods of time. [6] This limits hands-on experience with the technology and the ability to tinker with a functioning technology in order to improve efficiency and to make new discoveries.

There is a kind of dialectic in the development of technology since the advent of scientific methods. The most advanced science of the day allows for new technological innovations, but once those innovations are made available to industry, thousands, perhaps tens or hundreds of thousands, of individuals using the technology on a daily basis develop a level of familiarity and practical know-how, which can then be employed to fine-tune the use of the technology, and sometimes becomes the basis of genuine technological innovations. Scientists design and build the prototypes of a technology, but engineers refine and improve those prototypes in industrial application, and this is a process more like tinkering than like science. So while the introduction of the scientific method into the development of technology results in an inflection point in that development (which is what the industrial revolution was), tinkering does not thereby disappear or become irrelevant.

Because of the dangers of nuclear technologies, very little tinkering goes on. Indeed, I suspect that the very idea of “nuclear tinkering” would send shudders down the spine of regulators and concerned citizens alike. And yet it is exactly this kind of tinkering, with a variety of different nuclear rocket designs, that would lead to a more effective and efficient use of nuclear technologies. As we noted with the steam engine, incremental improvements were made throughout the seventeenth and eighteenth centuries until the efficiency of James Watt’s engine became possible, and most of this was the result of tinkering rather than strictly scientific research; the science of steam engines was not made explicit until Carnot’s book, fifty years after Watt. In the case of nuclear technology, the fundamental science came first, and only later was that science engineered into specific nuclear technologies, which may be one of the factors that has limited hands-on engagement with them.

[Phoebus 1A was part of the Rover program to build a nuclear thermal rocket.]

6. Nuclear Rocketry as a Transformative Technology

Suppose that, for any spacefaring civilization, the key and indispensable technology is nuclear rocketry, or, more generally, nuclear technology employed in spacecraft. Whether used in nuclear rockets or to deliver megawatts of power in a relatively small package (e.g., to power an ion thruster), nuclear fission could be a key means of harnessing energy on a scale that enables space exploration with an accessible technology.

In what way is nuclear technology accessible? Human civilization has been making use of nuclear fission to generate electrical power (among other uses) for more than fifty years, all the while as research into nuclear fusion has continued. Nuclear fusion is proving to be a difficult technology to master. A century or two may separate the practical utility of fission power and fusion power. In historiographical terms, fission and fusion technologies may find themselves separated each into distinct longue durée periods — an Age of Fission and, later, an Age of Fusion. That means that nuclear fission technology is potentially available and accessible throughout a period of history during which nuclear fusion technology is not yet available or accessible.

How much could be achieved in one or two hundred years of unrestrained development of nuclear fission technology and its engineering applications? With an early spacefaring breakout, this could mean one or two hundred years of building a spacefaring civilization, all the while refining and improving nuclear fission technology in a way that is only possible when a large number of individuals are involved in an industry, with, say, two or more nuclear rocket manufacturers in competition, each trying to derive the best performance from their technology.

We know that the ideas were available in abundance for the exploitation of nuclear technology in space exploration. The early efflorescence of nuclear rocket designs has been exhaustively catalogued by Winchell Chung in his Atomic Rockets website, but this early enthusiasm for nuclear rocketry became a casualty of proliferation concerns. However, the imagination revealed early in the Atomic Age demonstrates that, had the opportunity been open, human creativity was equal to the challenge, and had this industry been allowed to grow, to develop, and to adapt, the present age would not have been one of stagnation.

In a steampunk kind of way, a spacefaring civilization of nuclear rocketry would in some structural ways resemble the early industrialized civilization of steam power. The nineteenth century industrial revolution was made possible by enormous machinery—steamships, steam locomotives, steam shovels (which made it possible to dig the Panama Canal), etc. A technological civilization that projected itself beyond Earth by nuclear rocketry would similarly be attended by enormous machinery. While fission reactors can be made somewhat compact, there are lower limits to practicality even for compact reactors, so that technologies enabled by the widespread exploitation of fission technology would be built at any scale that would be convenient and inexpensive. Nuclear powered spacecraft could open up the solar system to human beings, but these craft would likely be large and require a significant contingent of engineers and mechanics to keep them functioning safely and efficiently, much as steam locomotives and steamships required a large crew and numerous specializations to operate dependably.

[The bone needle, the moveable type printing press, and the steam engine]

7. Practical, Accessible, and Ubiquitous Technologies

We can summarize the technological indispensability hypothesis such that being a space-capable civilization is a necessary condition of being a spacefaring civilization, but a crucial spacefaring technology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization. What makes a spacefaring technology a sufficient condition for the transition from space-capable to spacefaring civilization is its practicality, its accessibility, and its ubiquity. A practical technology accomplishes its end with a minimum of complexity and difficulty; an accessible technology is affordable and adaptable; ubiquitous technologies are widely available with few barriers to acquisition. Stated otherwise, practical technologies don’t break down; accessible technologies can be repaired and modified; ubiquitous technologies are easy to buy, cheap, and plentiful.

Given the technological indispensability hypothesis, we can account for the drift of contemporary technological civilization by the absence of a key technology that would have allowed our civilization to take its next step forward. We can further identify one technology, nuclear rocketry, as the absent key technology that, had it been exploited at the scale of steam engines in the nineteenth century or internal combustion engines in the twentieth, would have resulted in a spacefaring breakout, and therefore a transformation of civilization.

None of this is inevitable, however. The mere existence of a technology is not, in itself, sufficient to transform a society. Some technologies, probably most, are not intrinsically transformative. Of those that are transformative, not all have the potential to be practical, accessible, and ubiquitous. And of those that are socially transformative and are practical, accessible, and ubiquitous, not all are widely enough adopted to have a transformational impact.

The technologies I cited earlier—among them the bone needle, movable type printing, and the steam engine—were all transformative as well as practical, accessible, and ubiquitous. The bone needle allowed for sewing form-fitting clothing during the last glacial maximum, making it possible for human beings to expand across the entire surface of Earth. Movable type printing made books and pamphlets inexpensive and resulted in the exponential growth of knowledge; without inexpensive books and journals, the scientific revolution would not have made the impact that it did. Steam engines made the industrial revolution possible.

However, the existence of the technology alone is not sufficient; stated otherwise, it is not inevitable that a transformative technology will have the social impact that some of these technologies have had. The Chinese independently developed movable type printing, and while the technology was in limited use, it did not revolutionize Chinese society, which stagnated in spite of possessing it. There are many possible explanations for this; first and foremost, the Chinese language itself may have required too many characters for movable type printing to be as effective as it was for languages employing phonetic symbols with a smaller character set. In other words, the transformative technology of movable type printing may not have been practical and accessible using the Chinese character set; clearly it did not achieve ubiquity.

The example of the role of the Chinese language [7] in idea diffusion points to the possibility that a sequence of technologies (language is a technology of communication) may have to unfold in a particular order, with a civilization at each developmental juncture adopting a particular key technology (for linguistic technology, this might be a syllabary or a phonetic script), in order for later transformative events in civilization to occur. Formulated otherwise, transformative changes in civilization, like the industrial revolution, or a spacefaring breakout if that were to occur, may be metaphorically compared to inserting a key into the lock, such that each successive tumbler must be positioned in a particular way in order to finally unlock the mechanism.

In light of the above, we can reformulate the technological indispensability hypothesis: a key spacefaring technology is the sufficient condition that facilitates the transition from space-capable to spacefaring civilization, but this crucial spacefaring technology must supervene upon the adoption of earlier technologies that facilitate and serve as the foundation for later spacefaring technology. We can call this the strong technological indispensability hypothesis, as it refers to technology alone as the transformative catalyst in civilizational change. The fact that the existence of a technology alone does not inevitably result in its industrial exploitation once again points to the role of social factors—what I would call a sufficient mythological basis for the exploitation of a technology. In a weak formulation of the technological indispensability hypothesis, a sequence of technologies must be available, but it is a mythological trigger that leads to their exploitation. Here technology is still central to the historical process, but it must be supplemented by mythology. If we take this mythological supplement to be the sufficient condition for a spacefaring breakout, then we are back at the argument I made in Bound in Shallows.

We needn’t, of course, focus on any single causal factor, such as technology. It may be both the absence of a key technology and the absence of a key mythology. Just as the absence of a mythology may have kept the technology from being exploited, the absence of the technology may have limited the mythological elaboration of its role in society. Much that I have written above about technology could be applied, mutatis mutandis, to mythology: a key mythology may need to develop organically out of previous mythologies, so that if a particular mythological tradition is absent, or develops in a different way, it cannot become the mythology that would superintend the expansion of a civilization beyond Earth. Moreover, these developments in technology and mythology may need to occur in parallel, so that it is like two keys inserted into two locks, each aligning its successive tumblers in a particular orientation—like launching a nuclear missile.

[Princeton Plasma Physics Laboratory, PFRC-2]

8. The Potential for an Age of Fusion Technology

Can we skip a stage of technological development? Can we make the transition directly from our fossil-fueled economy to a fusion-based economy, without passing through the stage of the thermonuclear revolution? Or should we regard the development of fusion technologies to be an extension of, and perhaps even the fulfillment of, the thermonuclear revolution?

Part of the promise of fusion is that it does not require fissile materials and so does not fall under the interdict that cripples the development of fission technologies, but fusion technology is not without its dangers; the promise of fusion is balanced by its problems. One can gain an appreciation of the complexity and difficulty of fusion engineering from a pessimistic article by Daniel Jassby, “Fusion reactors: Not what they’re cracked up to be,” which, in addition to discussing the problems of making fusion work as an energy source, notes that the neutron flux from deuterium-tritium fusion could be used to breed plutonium 239 from uranium 238, so that fusion does not eliminate the nuclear proliferation problem (although, presumably, continued tight control of uranium could achieve non-proliferation outcomes similar to those obtained today with fission). Of course, for every pessimist there is an optimist, and there are plenty of optimists about the future of fusion.

While fusion technology would not necessarily involve fissionable material, and therefore would facilitate the construction of nuclear weapons to a lesser degree than fission technologies, the capabilities that widespread exploitation of fusion technology would put into the hands of human beings would scarcely be any less frightening than nuclear weapons. In this sense, the problem of nuclear weapons proliferation is only a stand-in for a more general problem of proliferation that follows from any technological advance, as any technology that enhances human agency also enhances the ability of human beings to wage war and to commit atrocities. Biotechnology, for example, also places potentially catastrophic powers into the hands of human beings. Nuclear weapons finally pushed human agency over the threshold of human extinction and so prompted a response—international non-proliferation efforts—but this problem will re-appear whenever a technology reaches a given level of development. Will each successive technological development that pushes human agency over the threshold of human extinction provoke a similar response? And is this a mechanism that limits the technological development of civilizations generally, so that this can be extrapolated as a response to the Fermi paradox?

It may be possible for humanity to skip the stage of development that would have been represented by the widespread exploitation of thermonuclear technology (here understood as fission technologies), but skipping a stage comes with an opportunity cost: everything that might have been achieved in the meantime through fission is delayed until fusion technologies can be made sufficiently practical, accessible, and ubiquitous. But because of the severe engineering challenges of fusion, the mastery of fusion technology will greatly enhance human agency; as such it will eventually suggest the possibility of human extinction by means of the weaponization of fusion, and so bring itself under a regime of tight control that would ensure that fusion technologies never achieve a transformative role in civilization, because they never become practical, accessible, and ubiquitous.

[Mercury-vapor, fluorescent, and incandescent electrical lighting technologies]

9. Indispensability and Fungibility

The technological indispensability hypothesis implies its opposite number, the technological fungibility hypothesis: no technology, certainly no one single technology, is the key to a transformative change in civilization. But what does it mean for a technology to be one technology? Are there not classes of related technologies? How do we distinguish technologies, or classes of technologies?

One could argue that some particular technology is necessary to advance a civilization to a new stage of complexity, but that the nature of technology is such that, if one technology is not available (i.e., some putatively key technology is absent), some other technology will serve as well, or almost as well. If we cannot build nuclear rockets due to proliferation concerns, then we can build reusable chemical rockets and ion thrusters and solar sails. Under this interpretation, no single technology is key; what matters is how effectively some given technology is exploited.

Arguments such as this appear frequently in discussions of the ability of civilization to be rebuilt after a catastrophic failure. Some have argued that our near exhaustion of fossil fuels means that if our present industrialized civilization fails, there will be no second chance on Earth for a spacefaring breakout, because fossil fuels are a necessary condition for industrialization (and, by extension, a necessary condition for fossil fuel technologies like steam engines and internal combustion engines that are key technologies for industrialization). We have picked the low-hanging fruit of fossil fuels, so that any subsequent industrialization would have to do without them. [8]

In order to do justice to the technological fungibility hypothesis it would be necessary to formulate a thorough and rigorous distinction between technologies and engineering solutions to technological problems. This in turn would require an exhaustive taxonomy of technology. Is electric lighting a technology, while mercury-vapor lamps and fluorescent bulbs are two distinct engineering solutions to the same technological problem, or do we need to be much more specific and identify incandescent light bulbs as a single technology, with the different materials used to construct the filament being distinct engineering solutions to the technological problems posed by incandescent bulb design? If the latter, is electrical lighting then a class of technologies? Should we distinguish fungibility within a single technology (i.e., the diverse engineering expressions of one technology) or within a class of technologies? Without such a technological taxonomy, we are comparing apples to oranges, and we cannot distinguish between technological indispensability and technological fungibility.

These arguments about the fungibility of technology in industrialization also point to a parallel treatment for mythology: mythologies, too, may be fungible, and if a given mythology is not available in a culture, another could serve the same function as well.

[Wilhelm Windelband, 1848-1915]

10. Four Hypotheses on Spacefaring Breakout

We are now in a position to distinguish four hypotheses for an historiographical explanation for a spacefaring breakout, and, by extension, for other macrohistorical transformations of civilization (beyond a narrow focus on spacefaring mythology and spacefaring technology):

  • The Mythological Indispensability Hypothesis: a key mythology is a sufficient condition for a transformation of civilization.
  • The Mythological Fungibility Hypothesis: some mythology is a sufficient condition for a transformation of civilization, but there are many such peer mythologies.
  • The Technological Indispensability Hypothesis: a key technology is a sufficient condition for a transformation of civilization.
  • The Technological Fungibility Hypothesis: some technology is a sufficient condition for a transformation of civilization, but there are many such peer technologies.

Each of these hypotheses can be given a strong form and a weak form, yielding eight permutations: strong permutations of the hypotheses are formulated in terms of a single cause; weak permutations of the hypotheses are formulated in terms of multiple causes, though one cause may predominate.

I began this essay with the assertion that civilization is the largest, the longest lived, and the most complex institution that human beings have built. This makes maintaining any hypothesis about civilization difficult, but not, I think, impossible. We cannot grow civilizations in the laboratory, and we cannot experiment with civilizations in any meaningful way. However, we can learn to observe civilizations under controlled conditions, even if we cannot control what will be the dependent variable and what the independent variable.

History is the record of controlled observation of civilization (or an implicit attempt at such), but history leaves much to be desired in terms of scientific rigor. Explicitly coming to understand history as a controlled observation of civilization would require a transformation of how history is pursued as a discipline. The conceptual framework required for this transformation does not yet exist, so we cannot pursue history in this way at the present time, but we can contribute to the formulation of the conceptual framework that will make it possible to pursue history as the controlled observation of civilization in the future.

This process of transforming the conceptual framework of history must follow the time-tested path of the sciences: making our assumptions explicit, making the principles by which we reason explicit, employing only evidence collected under controlled conditions, and so on. Another crucial element, less widely recognized, is that of formulating a conceptual framework that employs concepts of the proper degree of scientific abstraction, something I have previously discussed in Scientific Knowledge and Scientific Abstraction. This latter is perhaps the greatest hurdle for history, which has been understood as a concretely idiographic form of knowledge, in contradistinction to the nomothetic forms of knowledge of the natural sciences. [9]

In a future essay I will argue that history is intrinsically a big picture discipline, so that it must employ big picture concepts, which would make of history the antithesis of the idiographic. Moreover, there is no extant epistemology of big picture concepts (which we can also call overview concepts) that recognizes their distinctiveness and theoretically distinguishes them from smaller scale concepts, and this means that a transformation of history is predicated upon the formulation of an adequate epistemology that can clearly delineate a body of historical knowledge. In order to assess the hypotheses formulated above, it will be necessary to supply these missing elements of historical thought.

Notes

[1] I discussed Gilbert Murray on the failure of nerve in an earlier Centauri Dreams post, Where Do We Come From? What Are We? Where Are We Going?

[2] The largest internal combustion engine is the Wärtsilä-Sulzer RTA96-C; one of the remarkable things about this engine is how closely it resembles the construction of an internal combustion engine you would find in any conventional automobile.

[3] The Tellus Institute describes eco-communalism as follows: “… the green vision of bio-regionalism, localism, face-to-face democracy, small technology, and economic autarky. The emergence of a patchwork of self-sustaining communities from our increasingly interdependent world, although a strong current in some environmental and anarchist subcultures, seems implausible, except in recovery from collapse.”

[4] Darcy Ribeiro, The Civilizational Process, Washington: Smithsonian Institution Press, 1968, p. 13.

[5] I have previously examined this idea in Trading Existential Opportunity for Existential Risk Mitigation: a Thought Experiment, where I posed the choice between the exploitation of nuclear technologies or the containment of nuclear technologies as a thought experiment.

[6] The newest reactor under development for the next class of US nuclear submarines, the S1B reactor, will be designed to operate for 40 years without refueling.

[7] Civilizations can and have changed their languages in order to secure greater efficiency in communication, and therefore idea diffusion. Mainland China has adopted a simplified character set. Both Japanese Kanji characters and traditional Korean characters were based on traditional Chinese models; the Japanese developed two alternative writing systems, Katakana and Hiragana (both of which are premodern in origin); the Koreans developed Hangul, credited to Sejong the Great in 1443. Under Atatürk, the Turks abandoned the Arabic script and adopted a Latin character set. Almost every civilization has adopted Hindu-Arabic numerals for mathematics.

[8] I have addressed this in answer to a question on Quora: If our civilization collapsed to pre-Industrial; do we have sufficient resources to recover (repeat the Industrial Revolution) to high tech? Or do we need to get into space on this go?

[9] On the distinction between the idiographic and the nomothetic cf. Windelband, Wilhelm, “Rectorial Address, Strasbourg, 1894,” History and Theory, Vol. 19, No. 2 (Feb., 1980), pp. 169-185.


K2-315b: Tight Orbits and the Joy of Numbers

The newly found planet K2-315b catches the eye because of its 3.14-day orbit, a catch from the K2 extension of the Kepler Space Telescope mission that reminds us of a mathematical constant. As I’m prowling at the moment through David Berlinski’s Infinite Ascent (Modern Library, 2011), a quirky and quite lively history of mathematics, the references to ‘pi in the sky’ that I’m seeing in coverage of the discovery are worth a chuckle. Maybe the Pythagoreans were right that everything is number. Pythagoras would have loved K2-315b and would have speculated on its nature.

After all, as Berlinski notes about Pythagoras (ca. 570 to ca. 490 BCE) and his followers, they were devoted to what he calls “a higher spookiness”:

The Pythagoreans never succeeded in explaining what they meant by claiming that number is the essence of all things. Early in the life of the sect, they conjectured that numbers might be the essence of all things because quite literally “the elements of numbers were the elements of all things.” In this way, Aristotle remarks, “they constructed the whole heaven out of numbers.” This view they could not sustain. Aristotle notes dryly that “it is impossible that [physical] bodies should consist of numbers,” if only because physical bodies are in motion and numbers are not. At some time, the intellectual allegiances of the sect changed and the Pythagoreans began to draw a most Platonic distinction between the world revealed by the senses and the world revealed by the intellect.

And we’re off into weird metaphysics, down a historical rabbit hole. But enough of the Pythagorean buzz with numbers remains that to this day we love the odd coincidence. Hey, K2-315b is the 315th planetary system discovered in the K2 data, a near miss from 314. MIT’s Julien de Wit, a co-author of the paper on this discovery, points out that “everyone needs a bit of fun these days,” a nod to the paper’s playful title: “π Earth: A 3.14 day Earth-sized Planet from K2’s Kitchen Served Warm by the SPECULOOS Team.” MIT graduate student Prajwal Niraula is lead author of the paper, published in the Astronomical Journal.

Image: Scientists at MIT and elsewhere have discovered an Earth-sized planet that zips around its star every 3.14 days. Credit: NASA Ames/JPL-Caltech/T. Pyle, Christine Daniloff, MIT.

What we know about K2-315b is that its radius is about 0.95 that of Earth and, importantly, that it orbits a cool, low-mass star about a fifth of the Sun’s size. Its mass has yet to be determined, but as MIT press materials point out, its surface temperature is around 450 K, which is about where you want your oven to be if you’re baking an actual pie. There is little likelihood of any lifeforms on this planet capable of groaning at puns, though I do think the discovery is helpful because it’s yet another case of an ultracool dwarf star that may be a target for the James Webb Space Telescope. Large transit depths make for interesting studies of planetary atmospheres.
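
For a rough sense of why such a small host star matters, recall that transit depth goes, to first order, as the square of the planet-to-star radius ratio. The sketch below simply plugs in the approximate sizes quoted above (a 0.95 Earth-radius planet, a star about a fifth the Sun’s size); this is back-of-the-envelope arithmetic of my own, not a calculation from the discovery paper.

```python
# Back-of-the-envelope transit depths: depth ~ (R_planet / R_star)^2.
# The radii below are rough values from the article text (a 0.95 Earth-radius
# planet, a host star about a fifth the Sun's size); illustrative only.

R_EARTH_KM = 6371.0
R_SUN_KM = 695_700.0

def transit_depth(r_planet_km, r_star_km):
    """Fractional dip in starlight during transit (geometric approximation)."""
    return (r_planet_km / r_star_km) ** 2

k2_315b = transit_depth(0.95 * R_EARTH_KM, 0.20 * R_SUN_KM)   # ~1.9e-3
earth_sun = transit_depth(R_EARTH_KM, R_SUN_KM)               # ~8.4e-5

print(f"K2-315b around its ultracool dwarf: {k2_315b * 1e6:,.0f} ppm")
print(f"Earth transiting the Sun:           {earth_sun * 1e6:,.0f} ppm")
print(f"Depth ratio: roughly {k2_315b / earth_sun:.0f}x")
```

Something like twenty times the depth of an Earth-sized transit across a Sun-like star, which is part of why ultracool dwarfs keep showing up on target lists for atmospheric work.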

I try to keep up with SPECULOOS, another wonderful acronym: Search for habitable Planets EClipsing ULtra-cOOl Stars. Here we’re dealing with four 1-meter telescopes at Chile’s Paranal Observatory in the Atacama Desert, and a more recently included fifth instrument called Artemis in Tenerife, Spain. The observing effort is led by Michael Gillon (University of Liège, Belgium) and conducted in collaboration with various institutions including MIT and the University of Bern, along with the Canary Islands Institute of Astrophysics and the European Southern Observatory.

Image: The SPECULOOS project aims to detect terrestrial planets eclipsing some of the smallest and coolest stars of the solar neighborhood. This strategy is motivated by the unique possibility to study these planets in detail with future giant observatories like the European Extremely Large Telescope (E-ELT) or the James Webb Space Telescope (JWST). The exoplanets discovered by SPECULOOS should thus provide mankind with an opportunity to study the atmosphere of extrasolar worlds similar in size to our Earth, notably to search for traces of biological activity. Credit: SPECULOOS.

The K2-315b work spanned several months of K2 observation from 2017, in which 20 transit signatures turned up with a repetition of 3.14 days. At this point, closer examination relied upon tightening the transit timing even further, as co-author Benjamin Rackham points out:

“Nailing down the best night to follow up from the ground is a little bit tricky. Even when you see this 3.14 day signal in the K2 data, there’s an uncertainty to that, which adds up with every orbit.”

Fortunately, Rackham had developed a forecasting algorithm to pin the transits down, and subsequent observations in February of 2020 with the SPECULOOS telescopes nailed three transits, one from Artemis in Spain and the other two from the Paranal instruments. The paper points out that differences in atmospheric “mean molecular mass, surface pressure, and/or cloud/haze altitude will strongly affect the actual potential of a planet for characterization,” with ramifications for the study even of promising worlds like those circling TRAPPIST-1.
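
Rackham’s point about the uncertainty “adding up with every orbit” is straightforward error propagation: the predicted time of the n-th transit after the last measured one drifts by roughly n times the uncertainty in the orbital period. Here is a minimal sketch of that bookkeeping, using invented placeholder uncertainties rather than the actual K2-315b values.

```python
import math

def transit_window(t0_err_min, period_err_min, n_orbits):
    """1-sigma uncertainty (minutes) on the time of the n-th transit,
    assuming independent errors on the reference epoch and on the period."""
    return math.sqrt(t0_err_min**2 + (n_orbits * period_err_min)**2)

# Placeholder values for illustration only (not from the paper):
t0_err = 10.0       # minutes of uncertainty on the reference transit time
period_err = 0.5    # minutes of uncertainty on the 3.14-day period

# K2 observed in 2017; a follow-up in early 2020 is a few hundred orbits later.
for n in (1, 100, 300):
    print(f"after {n:>3} orbits: +/- {transit_window(t0_err, period_err, n):.0f} min")
```

With even half a minute of period uncertainty in this toy example, a few hundred accumulated orbits smear the predicted transit time over hours, which is exactly why a dedicated forecasting effort was needed before committing ground-based telescope time.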

Nonetheless, K2-315b (referred to in the K2 data as EPIC 249631677) looks intriguing enough for JWST observations to be considered:

With an estimated radial velocity semi-amplitude of 1.3 m s⁻¹ (assuming a mass comparable to that of Earth), the planet could be accessible for mass measurements using modern ultra-precise radial velocity instruments. Such possibilities and a ranking amongst the 10 best-suited Earth-sized planets for atmospheric study, EPIC 249631677 b will therefore play an important role in the upcoming era of comparative exoplanetology for terrestrial worlds. It will surely be a prime target for the generation of observatories to follow JWST and bring the field fully into this new era.

Note that reference to ‘comparative exoplanetology.’ Not all exoplanets singled out for atmospheric characterization are going to be ‘habitable’ in the sense of life as we know it. After all, we began using transmission spectroscopy to study atmospheres by working with ‘hot Jupiters’ like HD 209458b. We learn as we go, and firming up our methods by studying small planets around ultracool dwarf stars within 100 parsecs or so is part of the path toward finding a living world.

The paper is Niraula et al., “π Earth: a 3.14-day Earth-sized Planet from K2’s Kitchen Served Warm by the SPECULOOS Team,” Astronomical Journal Vol. 160, No. 4 (21 September 2020). Abstract / Preprint.


Radar for a Giant Planet’s Moons

One of my better memories involving space exploration is getting the chance to be at the Jet Propulsion Laboratory to see the Mars rovers Spirit and Opportunity just days before they were shipped off to Florida for their eventual launch. Being near an object that, though crafted by human hands, is about to be a presence on another world is an unusual experience, one that made me reflect on artifacts from deep in the human past and their excavation by archaeologists today. Will future humans one day recover our early robotic explorers?

That reflection was prompted by news from JPL that engineers have delivered the key elements of a critical ice-penetrating radar instrument for the European Space Agency’s mission to three of Jupiter’s icy moons. JUICE — JUpiter ICy moons Explorer — is scheduled for a launch in 2022, with plans to orbit Jupiter for three years, involving multiple flybys of both Europa and Callisto, with eventual orbital insertion at Ganymede. Analyses of the interiors as well as surfaces of the three moons should vastly improve our knowledge of their composition.

Image: NASA’s Jet Propulsion Laboratory built and shipped the receiver, transmitter, and electronics necessary to complete the radar instrument for Jupiter Icy Moons Explorer (JUICE), the ESA (European Space Agency) mission to explore Jupiter and its three large icy moons. In this photo, shot at JPL on April 27, 2020, the transmitter undergoes random vibration testing to ensure the instrument can survive the shaking that comes with launch. Credit: NASA/JPL-Caltech.

Here again we’re looking at something in the hands of humans on Earth that will one day move out beyond our orbit, in this case to the moons of our system’s largest planet, sending back priceless data. On a practical level, this is what people in the space exploration business do. On the level of sheer human response, my own at least, looking at how we build our spacecraft puts a bit of a chill up my spine, the good kind of chill that signals being in the presence of something profound, something caught up in what seems a hard-wired human need to explore.

The words “ice-penetrating radar” should resonate among all of those who wonder about the ocean under the ice at Europa. But of course we also have reason to believe that both Ganymede and Callisto have oceans whose depths we have yet to measure. Getting a sense for how thick the ice is on these worlds will be part of what the JUICE mission’s RIME instrument will, we can hope, deliver. RIME — Radar for Icy Moon Exploration — is said to have the capability of sending out radio waves that can penetrate up to 10 kilometers deep, reflecting off subsurface features and helping us figure out the thickness of the ice.
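
To get a feel for the timescales involved, radar pulses travel through ice at roughly the vacuum speed of light divided by the square root of the ice’s relative permittivity; a value of about 3.1 for cold water ice is a commonly used figure, and it is my assumption here, not a number from the RIME team. The depth of a reflecting layer then falls straight out of the two-way echo delay:

```python
# Rough two-way echo delay for a reflector buried in water ice.
# The relative permittivity of ~3.1 is an assumed textbook value for cold ice,
# not a figure from the JUICE/RIME instrument documentation.

C = 299_792_458.0          # speed of light in vacuum, m/s
EPS_ICE = 3.1              # assumed relative permittivity of cold water ice

def echo_delay_us(depth_m, eps_r=EPS_ICE):
    """Two-way travel time (microseconds) to a reflector at depth_m."""
    v = C / eps_r ** 0.5   # wave speed in the ice
    return 2.0 * depth_m / v * 1e6

for depth_km in (1, 5, 10):
    print(f"{depth_km:>2} km of ice -> echo after ~{echo_delay_us(depth_km * 1000):.0f} microseconds")
```

So a reflector ten kilometers down answers back in something over a hundred microseconds; the hard part is not the timing but picking that faint echo out of surface clutter and background radio noise.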

Image: The Radar for Icy Moon Exploration, or RIME, instrument is a collaboration by JPL and the Italian Space Agency (ASI) and is one of ten that will fly aboard JUICE. This photo, shot at JPL on July 23, 2020, shows the transmitter as it exits a thermal vacuum chamber. The test is one of several designed to ensure the hardware can survive the conditions of space travel. The thermal chamber simulates deep space by creating a vacuum and by varying the temperatures to match those the instrument will experience over the life of the mission. Credit: NASA/JPL-Caltech.

And as we all know, work on anything these days is complicated by COVID-19, with many JPL employees forced to work remotely, and necessary delays to equipment testing including vibration, shock and thermal vacuum tests to ensure the equipment is ready for the deep space environment. The engineers returning to work after the delay under new safety protocols faced a tight schedule, but they made it work. JPL delivered the transmitter and receiver for RIME along with electronics necessary for communicating with its antenna.

All this occurs as part of a collaboration between JPL and the Italian Space Agency (ASI). The RIME instrument is led by principal investigator Lorenzo Bruzzone (University of Trento, Italy). As to JPL’s role under trying pandemic conditions, co-principal investigator Jeffrey Plaut says:

“I’m really impressed that the engineers working on this project were able to pull this off. We are so proud of them, because it was incredibly challenging. We had a commitment to our partners overseas, and we met that – which is very gratifying.”

Gratifying indeed, and a reminder that along with JUICE, we can also anticipate NASA’s Europa Clipper, set to launch some time in the mid-2020s. Europa Clipper should arrive about the same time as JUICE, and will perform multiple flybys of Europa. Will we be able to determine the thickness of Europa’s frozen surface from the combined data of both missions? A relatively thin crust would make for the possibility of eventual penetration by instruments for a look at what lies beneath, but a shell of 15 to 25 kilometers in thickness would call for other strategies.

Image: The European Space Agency (ESA) Jupiter Icy Moons Explorer (JUICE) spacecraft explores the Jovian system in this illustration. Credit: ESA/NASA/ATG medialab/University of Leicester/DLR/JPL-Caltech/University of Arizona.


On White Dwarf Planets as Biosignature Targets

So often a discovery sets off a follow-up study that strikes me as even more significant in practical terms. This is not for a moment to downplay the accomplishment of Andrew Vanderburg (University of Wisconsin – Madison) and team that discovered a planet in close orbit around a white dwarf. This is the first time we’ve found a planet that has survived its star’s red giant phase and remains in orbit around the remnant, and quite a tight orbit at that. Previously, we’ve had good evidence only of atmospheric pollution in such stars, indicating infalling material from possible asteroids or other objects during the primary’s cataclysmic re-configuration.

The white dwarf planet, found via data gathered from TESS (Transiting Exoplanet Survey Satellite) and the Spitzer Space Telescope, makes for quite a discovery. But coming out of this work, I also love the idea of studying such a world with tools we’re likely to have soon, such as the James Webb Space Telescope. On that score, Lisa Kaltenegger (Carl Sagan Institute, Cornell University), working with Ryan MacDonald and including Vanderburg on the team, has shown how JWST could identify chemical signatures in the atmospheres of possible Earth-like planets around white dwarf stars. Assuming we find such planets, and I suspect we will.

The planet at the white dwarf WD 1856+534 is anything but Earth-like. It’s running around the star every 34 hours, which means it’s on a pace some 60 times faster than Mercury orbits the Sun. The planet here is also the size of Jupiter, and what a system we’ve uncovered — the new world orbits a star that is itself only 40 percent larger than Earth (imagine the transit depth possible with white dwarfs transited by a gas giant!). In this planetary system, the planet we’ve detected is about seven times larger than its primary. Says Vanderburg:

“WD 1856 b somehow got very close to its white dwarf and managed to stay in one piece. The white dwarf creation process destroys nearby planets, and anything that later gets too close is usually torn apart by the star’s immense gravity. We still have many questions about how WD 1856 b arrived at its current location without meeting one of those fates.”

Image: In this illustration, WD 1856b, a potential Jupiter-size planet, orbits its dim white dwarf star every day-and-a-half. WD 1856 b is nearly seven times larger than the white dwarf it orbits. Astronomers discovered it using data from NASA’s Transiting Exoplanet Survey Satellite (TESS) and now-retired Spitzer Space Telescope. Credit: NASA GSFC.
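
That parenthetical aside about transit depth deserves a number. The usual approximation, depth ≈ (R_planet/R_star)², assumes the planet is the smaller disk; with a Jupiter-sized planet crossing a white dwarf only 40 percent larger than Earth, the planet covers the star entirely and the light curve bottoms out near zero. A toy comparison using those rough sizes (my own arithmetic, not the paper’s fitted parameters):

```python
# Toy mid-transit depths. The geometric formula saturates at 1.0 once the
# planet's disk is larger than the star's, as is the case for WD 1856 b.
# Radii are in Earth radii: 11.2 (Jupiter) and 109 (Sun) are standard round
# numbers; 1.4 is the "40 percent larger than Earth" white dwarf quoted above.

def transit_depth(r_planet, r_star):
    """Fraction of starlight blocked at mid-transit (central crossing)."""
    return min(1.0, (r_planet / r_star) ** 2)

print(f"Jupiter across the Sun:           {transit_depth(11.2, 109.0):.3f}")  # ~1% dip
print(f"WD 1856 b across its white dwarf: {transit_depth(11.2, 1.4):.3f}")    # total occultation
```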

So on the immediate question of WD 1856 b, let’s note that we have a serious issue with explaining how the planet got to be this close to the white dwarf in the first place. White dwarfs form when stars like the Sun swell into red giants as they run out of fuel, a phase in which some 80 percent of the star’s mass is ejected, leaving a hot core — the white dwarf — behind. Anything on a relatively close orbit would presumably be swallowed up in the stellar expansion phase.

Which is why Vanderburg’s team believes the planet probably formed at least 50 times farther from the star than its present location, later moving inward, perhaps through interactions with other large bodies close to its original orbit, with its orbit circularizing as tidal forces dissipated the orbital energy. Such instabilities could bring a planet inward, as could other scenarios involving the red dwarfs G229-20 A and B in this triple star system, although the paper plays down this idea, as well as the notion of a rogue star acting as a perturber. Other Jupiter-like planets, presumably long gone, seem to be the best bet to explain this configuration.

From the paper:

…a more probable formation history is that WD 1856 b was a planet that underwent dynamical instability. It is well established that when stars evolve into white dwarfs, their previously stable planetary systems can undergo violent dynamical interactions that excite high orbital eccentricities. We have confirmed with our own simulations that WD 1856 b-like objects in multi-planet systems can be thrown onto orbits with very close periastron distances. If WD 1856 b were on such an orbit, the orbital energy would have rapidly dissipated, owing to tides raised on the planet by the white dwarf. The final state of minimum energy would be a circular, short-period orbit. The advanced age of WD 1856 (around 5.85 Gyr) gives plenty of time for these relatively slow (of the order of Gyr) dynamical processes to take place. In this case, it is no coincidence that WD 1856 is one of the oldest white dwarfs observed by TESS.

Did you catch that reference to the white dwarf’s age? The 5.85 billion year frame gives ample opportunity for such orbital adjustments to take place, winding up with the observed orbit. Or perhaps we’re dealing with interactions with a debris disk around the star, as co-author Stephen Kane (UC-Riverside, and a member of the TESS science team) hypothesizes:

“In this case, it’s possible that a debris disc could have formed from ejected material as the star changed from red giant to white dwarf. Or, on a more cannibalistic note, the disc could have formed from the debris of other planets that were torn apart by powerful gravitational tides from the white dwarf. The disc itself may have long since dissipated.”

But back to Lisa Kaltenegger, lead author of a paper in Astrophysical Journal Letters asking whether an exposed stellar core — a white dwarf — would be workable as a JWST target, one whose planetary atmospheres we could examine for possible biosignatures. Here the news is good, for Kaltenegger believes that such detections would be possible, assuming rocky planets exist around these stars. WD 1856 b gives hope that such a world could exist in the white dwarf’s habitable zone for a period longer than the time it took for life to develop on Earth. The implications are intriguing:

“What if the death of the star is not the end for life?” Kaltenegger said. “Could life go on, even once our sun has died? Signs of life on planets orbiting white dwarfs would not only show the incredible tenacity of life, but perhaps also a glimpse into our future.”

Image: In newly published research, Cornell researchers show how NASA’s upcoming James Webb Space Telescope could find signatures of life on Earth-like planets orbiting burned-out stars, known as white dwarfs. Credit: Jack Madden/Carl Sagan Institute.

The Kaltenegger team used methods developed to study gas giant atmospheres and combined them with computer models configured to apply the technique to small, rocky white dwarf planets. The researchers found that JWST, when observing an Earth-class planet around a white dwarf, could detect carbon dioxide and water with data from as few as 5 transits. According to co-lead author Ryan MacDonald, it would take a scant two days of observing time with JWST to probe for the classic biosignature gases ozone and methane. Adds MacDonald:

“We know now that giant planets can exist around white dwarfs, and evidence stretches back over 100 years showing rocky material polluting light from white dwarfs. There are certainly small rocks in white dwarf systems. It’s a logical leap to imagine a rocky planet like the Earth orbiting a white dwarf.”
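
Why do a handful of transits suffice? Stacking transits averages down the photon noise roughly as the square root of the number of events, so the number of transits required grows as the square of the gap between a single visit’s signal-to-noise and the detection threshold you want. The numbers in the sketch below are placeholders of my own, purely to show the scaling; they are not figures from the Kaltenegger and MacDonald analysis.

```python
import math

def transits_needed(single_transit_snr, target_snr):
    """Transits to co-add, assuming noise averages down as sqrt(N)."""
    return math.ceil((target_snr / single_transit_snr) ** 2)

# Hypothetical illustration: a spectral feature seen at S/N ~2.3 per transit,
# pushed to a 5-sigma detection by co-adding.
print(transits_needed(single_transit_snr=2.3, target_snr=5.0))  # -> 5
```

The unusually deep transits possible around a white dwarf are what would keep the single-visit signal-to-noise high enough for the count to stay this small in the first place.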

So we have a possible target we’ll want to add into the exoplanet mix when it comes to nearby white dwarf systems. WD 1856 is about 80 light years out in the direction of Draco. The white dwarf formed over 5 billion years ago, as noted in the paper, but the age of the original Sun-like star may take us back as much as 10 billion years. The post-red-giant phase allows plenty of time for orbital adjustment, drawing rocky worlds inward and circularizing their orbits. Will we find such planets in this setting in the near future? The hunt for such will surely intensify.

The paper is Vanderburg et al., “A giant planet candidate transiting a white dwarf,” Nature 585 (16 September 2020), 363-367 (abstract). The Kaltenegger paper is “The White Dwarf Opportunity: Robust Detections of Molecules in Earth-like Exoplanet Atmospheres with the James Webb Space Telescope,” Astrophysical Journal Letters Vol. 901, No. 1 (16 September 2020). Abstract.


SETI and Altruism: A Dialogue with Keith Cooper

Keith Cooper’s The Contact Paradox is as thoroughgoing a look at the issues involved in SETI as I have seen in any one volume. After I finished it, I wrote to Keith, a Centauri Dreams contributor from way back, and we began a series of dialogues on SETI and other matters, the first of which ran here last February as Exploring the Contact Paradox. Below is a second installment of our exchanges, which were slowed by external factors at my end, but the correspondence continues. What can we infer from human traits about possible contact with an extraterrestrial culture? And how would we evaluate its level of intelligence? Keith is working on a new book involving both the Cosmic Microwave Background and quantum gravity, the research into which will likewise figure into our future musings that will include SETI but go even further afield.

Keith, in our last dialogue I mentioned a factor you singled out in your book The Contact Paradox as hugely significant in our consideration of SETI and possible contact scenarios. Let me quote you again: “Understanding altruism may ultimately be the single most significant factor in our quest to make contact with other intelligent life in the Universe.”

I think this is exactly right, but the reasons may not be apparent unless we take the statement apart. So let’s start today by talking about altruism before we explore the question of ‘deep time’ and how our species sees itself in the cosmos. I think we have ramifications here for how we deal not only with extraterrestrial contact but issues within our own civilization.

I’m puzzled by the seemingly ready acceptance of the notion that any extraterrestrial civilization will be altruistic or it could not have survived. Perhaps it’s true, but it seems anthropocentric given our lack of knowledge of any life beyond Earth. What, then, did you mean with your statement, and why is understanding altruism a key to our perception of contact?

  • Keith Cooper

I think so much that is integral to SETI comes down to our assumptions about altruism. How often do we hear that an older extraterrestrial society will be altruistic, as though it’s the end result of some kind of evolutionary trajectory? But there are several problems with this. One is that the person making such claims – usually an astrophysicist straying into areas outside their field of expertise – is often conflating ‘altruism’ with ‘being nice’.

And sure, maybe aliens are nice. I kind of get the logic, even though it’s faulty. The argument is that if they are still around then they must have abandoned war long ago, otherwise they would have destroyed themselves by now, ergo they must be peaceful.

And it’s entirely possible, I suppose, that a civilisation may have developed in that direction. In The Better Angels of Our Nature, Steven Pinker attempted to argue that our civilization is becoming more peaceable over time, although Pinker’s analysis and conclusions have been called into question by numerous academics.

  • Paul Gilster

I hope so. I think the notion is facile at best.

  • Keith Cooper

It’s what human societies should always aim for, I truly believe that, but whether we can achieve it or not is another question. When it comes to SETI, we seem to home in on the most simplistic definitions of what an extraterrestrial society might be like – ‘they’ve survived this long, they must be peaceful’. A xenophobic civilization might be at peace with its own species, but malevolent towards life on other planets. A planet could be at peace, but that peace could be implemented by some 1984-style dystopian dictatorship where nobody is free. Neither of which is particularly ‘nice’, and we could think of many other scenarios, too.

Nevertheless, this myth of wise, kindly aliens has grown up around SETI – that was the expectation, 60 years ago, that ET would be pouring resources into powerful beacons to make it easy for us to detect them. To transmit far and wide across the Galaxy, and to maintain those transmissions for centuries, millennia, maybe even millions of years, would require huge amounts of resources. When we consider that the aliens may not even know for sure whether they share the Universe with other life, it’s a huge gamble on their part to sacrifice so much time and energy in trying to communicate with others in the Universe.
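
To put rough numbers on that gamble, a back-of-envelope sketch using nothing but the inverse-square law shows how the power bill for an always-on, undirected beacon grows with range. The detection-threshold flux below is an assumed, purely illustrative value, not a figure from The Contact Paradox or the SETI literature:

```python
import math

LY_IN_M = 9.4607e15  # meters per light year

def required_eirp_watts(distance_ly: float, flux_threshold: float) -> float:
    """Inverse-square law: the effective isotropic radiated power needed for
    the flux arriving at the receiver (W/m^2) to reach a given threshold."""
    d = distance_ly * LY_IN_M
    return flux_threshold * 4.0 * math.pi * d ** 2

F_MIN = 1e-26  # assumed detectable flux in W/m^2, for illustration only

for d_ly in (100, 1000, 10000):
    print(f"{d_ly:>6} ly: ~{required_eirp_watts(d_ly, F_MIN):.0e} W")
```

Under these assumptions the bill runs from hundreds of gigawatts out to petawatts across the galactic disk, sustained for centuries or more, which is exactly the kind of commitment being questioned here; tighter beams reduce the power, but at the cost of that ‘far and wide’ coverage.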

If we look at what altruism really is, and how that may play into the likelihood that ET will want to beam messages across the Galaxy given the cost in time and energy, then it poses a big problem for SETI. ET really needs to help us out – to display a remarkable degree of selfless altruism towards us – by plowing all those resources into transmitting signals that we’ll be able to detect.

One of the forms that altruism can take in nature is kin selection. We can see how this has evolved: lifeforms want to ensure that their genes are passed on to later generations, so a parent will act to protect and give the greatest possible advantage to their children, or their nieces and nephews. That’s a form of altruism predicated on genes, not ethics. Unless some form of extreme panspermia has been at play, alien life would not be our kin, so they would be unlikely to show us altruistic behaviour of this type.

  • Paul Gilster

But we haven’t exhausted all the forms altruism might take. Is there an expectation of mutual benefit that points in that direction?

  • Keith Cooper

Okay, so what about quid pro quo? That’s a form of reciprocal altruism. Consider, though, the time and distance separating the stars. It could take centuries or millennia for a message to reach a destination, and there’s no guarantee that anyone is going to hear that message, nor that they will send a reply. That’s a long time to wait for a return on an investment, if there even is a return. Why plow so many resources into transmitting if that’s the case? What’s in it for them?

So if kin selection and reciprocal altruism are not really tailored for interstellar communication, then it seems more unlikely that we will hear from aliens. Of course, there is always the possibility of exceptions to the rule, one-off reasons why a society might wish to broadcast its existence. Maybe ET wants to transmit a religious gospel to the stars to convert us all. Maybe they are about to go extinct and want to send one last hurrah into the Universe. But these would not be global reasons, and we shouldn’t expect alien societies to make it easy for us to discover them.

  • Paul Gilster

Good point. Why indeed should they want us to discover them? I can think of reasons a society might decide to broadcast its existence to the stars, though I admit that it’s a bit of a strain. But aliens are alien, right? So let’s assume some may want to do this. I like your mention of reciprocal altruism, as it’s conceivable that an urge to spread knowledge, for example, might result in a SETI beacon of some kind that points to an information resource, the fabled Encyclopedia Galactica. What a gorgeous dream that something like that might be out there.

Curiosity leads where curiosity leads. I wonder if it’s a universal trait of intelligence?

  • Keith Cooper

It’s interesting that you describe the Encyclopedia Galactica as a ‘dream’, because I think that’s exactly what it is, a fantasy that we’ve imagined without any strong rationale other than falling back on this outdated idea that aliens are going to act with selfless altruism. As David Brin argues, if you pump all your knowledge into space freely, what do you have left to barter with? And yet it is expectations such as receiving an Encyclopedia Galactica that still drive SETI and influence the kinds of signals that we search for. I really do think SETI needs to move on from this quaint idea. But I digress.

  • Paul Gilster

It’s certainly worth keeping up the SETI effort just to see what happens, especially when it’s privately funded. But I want to circle back around. I’ve always had an interest in what the general public’s reaction to the idea of extraterrestrial civilization really is. In the 16 years that I’ve been writing about this and talking to people, I’ve found a truly lopsided percentage that believe as a matter of course that an advanced civilization will be infinitely better than our own. This plays to a perceived disdain for human culture and a faith in a more beneficent alternative, even if it has to come from elsewhere to set right our fallen nature.

Put that way, it does sound a bit religious, but so what — I’m talking about how human beings react to an idea. Humans construct narratives, some of them scientific, some of them not.

I’m also talking about the general public, not people in the interstellar community, or scientists actively working on these matters. As you would imagine with COVID about, I’m not giving many talks these days, but when I was fairly active, I’d always ask audiences of lay people what they thought of intelligent aliens. The reaction was almost always along two lines: 1) The idea used to seem crazy, but now we know it’s not. And 2) it would be something like a European Renaissance all over again if we made contact, because they would have so much to teach us.

A golden age, with its Dantes and Shakespeares and Leonardos. Or think of the explosion of Chinese culture and innovation in the Tang Dynasty, or Meiji Japan, all this propelled by the infusion not of recovered ancient literature and teaching, as in the European example, but of materials discovered in the evidently limitless databanks of the Encyclopedia Galactica.

I ran into these audience reactions so frequently, in both talks to interested audiences and conversations among neighbors and friends, that I had to ask what was propelling the Hollywood tradition of scary movies about alien invasion. What about Independence Day, with its monstrous ships crushing the life out of our planet? So I would ask, if you believe all this altruistic stuff, why do you keep going to these sensational movies of death and destruction?

The answer: Because people think they’re fun. They’re a good diversion, a comic book tale, a late night horror movie where getting scared is the point. Whole film franchises are built around the idea that fear is addictive when experienced within the cocoon of a home or theater. Thus the wave of horror fiction that has been so prominent in recent years. It’s because people like being scared, and the reason for that goes a lot deeper into psychiatry than I would know how to go. I admit I may not believe in Cthulhu, but I love going to Dunwich with H. P. Lovecraft.

Keith, as we both know — and you, as the author of The Contact Paradox, would know a lot more about this than I do — there is an active lobby against messaging to the stars: METI. I’ve expressed my own opposition to METI on many an occasion in these pages, and the discussion has always been robust and contentious, with what is evidently the minority position being that we should hold back on such broadcasts unless we reach international consensus, and the majority position being that it doesn’t matter because sufficiently intelligent aliens already know about us anyway.

I don’t want to re-litigate any of that here. Rather, I just want to note that if the anti-METI position gets loud pushback in the interstellar community, it gets even louder pushback among the general public. In my talks, bringing up the dangers of METI invariably causes people to accuse me of taking films like Independence Day too seriously. From what I can see from my own experience, most people think ETI may be out there but assume that if it ever shows up on our doorstep, it will represent a refined, sophisticated, and peaceful culture.

I don’t buy that idea, but I’m so used to seeing it in print that I was startled to read this in James Trefil and Michael Summers’ recent book Imagined Life. The two first tell a tale:

Two hikers in the mountains encounter an obviously hungry grizzly bear. One of the hikers starts to shed his backpack. The other says, “What are you doing? You can’t run faster than that bear.”

“I don’t have to run faster than the bear — I just have to run faster than you.”

Natural selection doesn’t select for bonhomie or moral hair-splitting. The one whose genes will survive in the above encounter is the faster runner. Trefil and Summers go on:

So what does this tell us about the types of life forms that will develop on Goldilocks worlds? We’re afraid that the answer isn’t very encouraging, for the most likely outcome is that they will probably be no more gentle and kind than Homo Sapiens. Looking at the history of our species and the disappearance of over 20 species of hominids that have been discovered in the fossil record, we cannot assume we will encounter an advanced technological species that is more peaceful than we are. Anyone we find out there will most likely be no more moral or less warlike than we are…

That doesn’t mean any ETI we find will try to destroy us, but it does give me pause when contemplating the platitudes of the original The Day the Earth Stood Still movie, for example. It’s so easy to point to our obvious flaws as humans, but the more likely encounter with ETI, if we ever meet them face to face, will probably be deeply enigmatic and perhaps never truly understood. I also argue that there is no reason to assume that individual members of a given species will not have as much variation between them as do individual humans.

It’s a long way from Francis of Assisi to Joseph Goebbels, but both were human. So what happens, Keith, if we do get a SETI signal one day, and then, a few days later, another one that says, “Disregard that first message. The one you want to talk to is me”?

  • Keith Cooper

I’m hesitant to rely too closely on comparisons with ourselves and our own evolution, since ultimately we are just a sample of one, and we could be atypical for all we know. I see what Trefil and Summers are saying, but equally I could imagine a world, perhaps with a hostile environment, where species have to work together to survive. Instead of survival of the fittest, it becomes survival of those who cooperate. And suppose intelligent life evolves to be post-biological. What role do evolutionary hangovers play then?

I think the most we can say is that we don’t know, but that for me is enough of a reason to be cautious both about the assumptions we make in SETI, and about the possible consequences of METI.

But you’re right about our flawed assumption that aliens will exist in a monolithic culture. Unless there’s some kind of hive mind or network, there will likely be variation and dissonance, and different members of their species may have different reactions to us.

If we detected two beacons in the same system, I think that would be great! Why? Because it would give us more information about them than a single signal would. Since we will have no knowledge of their language, their culture, their history or their biology, being able to understand their message in even the most general sense is going to be exceptionally difficult.

So, if we detect a signal, we might not be able to decipher it or learn a great deal. But if we detect two different, competing beacons from the same planet, or planetary system, then we will know something about them that we couldn’t know from just one unintelligible signal, which is that they are not necessarily a monolithic culture, and that their society may contain some dissonance, and this may influence how, and if, we respond to their messages.

For me, the name of the game is information. Learn as much about them as we can before we embark on making contact, because the more we know, the less likely we are to be surprised, or to stumble into a misunderstanding that could be catastrophic.

  • Paul Gilster

Just so. But there, you see, is the reason why I think we have to be a lot more judicious about METI. It’s just conceivable that, to them as well as us, content matters.

But look, I see you’re headed in a direction I wanted to go. If information is the name of the game, then information theory is going to play a mighty role in our investigations. So it’s no surprise that you dwell on the matter in The Contact Paradox. Here we’re in the domain of Claude Shannon at Bell Laboratories in the 1940s, but of course signal content analysis applies across the whole spectrum of information transmittal. Shannon entropy measures the unpredictability of a stream of symbols, which is a way of saying that it lets us analyze communications quantitatively.
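For concreteness, here is a minimal sketch of that first-order measurement, just the standard formula H = -sum(p * log2 p) applied to a string of symbols (the example strings are mine, of course, not anything from the book):

```python
from collections import Counter
from math import log2

def shannon_entropy(symbols: str) -> float:
    """First-order Shannon entropy in bits per symbol: H = -sum(p * log2(p))."""
    counts = Counter(symbols)
    total = len(symbols)
    return -sum((c / total) * log2(c / total) for c in counts.values())

print(shannon_entropy("aaaaaaaa"))            # 0.0 bits: perfectly predictable
print(shannon_entropy("abcdabcdabcd"))        # 2.0 bits: four equally likely symbols
print(shannon_entropy("to be or not to be"))  # ~2.6 bits for this short phrase
```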

Do you know Stephen Baxter’s story “Turing’s Apple?” Here a brief signal is detected by a station on the far side of the Moon, no more than a second-long pulse that repeats roughly once a year. It comes from a source 6500 light years from Earth, and Baxter delightfully presents it as a ‘Benford beacon,’ after the work Jim and Greg Benford have done on the economics of extraterrestrial signaling and the understanding that instead of a strong, continuous signal, we’re more likely to find something more like a lighthouse that sweeps its beam around the galaxy, in this case on the galactic plane where the bulk of the stars are to be found.

Baxter’s story sees the SETI detection as a confirmation rather than a shock, a point I’m glad to see emerging, since I think the idea of extraterrestrial intelligence is widely understood. No great revolution in thought follows, but rather a deepening acceptance of the fact that we’re not alone.

Anyway, in the story, the signal is investigated, six pulses being gathered over six years, with the discovery that this ETI uses something like wavelength division multiplexing, dividing the signal into sections packed with data. Scientists turn to Zipf graphing to tackle the problem of interpretation – as you present this in your book, Keith, this means breaking the message into components and going to work on the relative frequency of appearance of these components. From this they deduce that the signal is packed with information, but what are its elements?
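
Here is a hedged sketch of that Zipf step, assuming the components have already been segmented; for a real signal the segmentation itself is the hard part, and splitting on whitespace below is purely for illustration:

```python
from collections import Counter

def zipf_profile(message: str) -> None:
    """Rank components by frequency. Human languages famously show frequency
    falling off roughly as 1/rank, a slope near -1 on a log-log plot."""
    counts = Counter(message.split())  # assumes whitespace-delimited components
    for rank, (component, freq) in enumerate(counts.most_common(), start=1):
        print(f"rank {rank:>2}  freq {freq:>3}  {component}")

zipf_profile("the quick fox and the slow fox and the dog")
```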

Shannon entropy analysis looks at the relationships between signal elements: how likely is it that a particular element will follow another? Entropy can be computed at increasing orders – not just for pairs of elements, but for triples and beyond. In English, for example, how likely is it that a G follows an I and an N? Dolphin communication reaches about fourth-order entropy by this analysis, as you know. Human languages get up to eighth or ninth order. Baxter’s signal analysts come up with a Shannon entropy in the range of 30 for ETI.
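
Those ‘orders’ are conditional entropies: how uncertain is the next symbol, given the previous n symbols? A minimal sketch of the estimator, using naive counting (which needs far more data than this to be trustworthy at high orders, but shows the shape of the calculation):

```python
from collections import Counter, defaultdict
from math import log2

def conditional_entropy(text: str, order: int) -> float:
    """Estimate H(next symbol | previous `order` symbols) by naive counting."""
    context_counts = defaultdict(Counter)
    for i in range(len(text) - order):
        context_counts[text[i:i + order]][text[i + order]] += 1
    total = sum(sum(nxt.values()) for nxt in context_counts.values())
    h = 0.0
    for nxt in context_counts.values():
        n_ctx = sum(nxt.values())
        for count in nxt.values():
            h -= (count / total) * log2(count / n_ctx)
    return h

sample = "in the beginning the thing was beginning to sing " * 40
for order in range(4):
    print(order, round(conditional_entropy(sample, order), 2))
```

Getting a trustworthy estimate at anything like 30th order takes an enormous sample, which gives a sense of just how much data Baxter’s analysts would have to be working with.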

Let me quote this bit, because I love the idea:

“The entropy level breaks our assessment routines… It is information, but much more complex than any human language. It might be like English sentences with a fantastically convoluted structure – triple or quadruple negatives, overlapping clauses, tense changes… Or triple entendres, or quadruples.”

We’re in challenging territory here. In the story, ETI is a lot smarter than us, based on Shannon entropy. The presence of this kind of complexity in a signal, in Baxter’s scenario, is evidence that the detected message could not have been meant for us, because if it were, the broadcasting civilization would have ‘dumbed it down’ to make it accessible. Instead, humanity has found a signal that demonstrates the yawning gap between humanity and a culture that may be millions of years old. If we find something like this, it’s likely we would never be able to figure it out.

Would something like this be a message, or perhaps a program? If we did decode it, what would it mean? An even better question: What might it do? Baxter’s story is so ingenious that I don’t want to give away its ending, but suffice it to say that impersonal forces may fall well outside our conventional ideas of ‘friendly’ vs. ‘hostile’ when it comes to bringing meaning to the cosmos.

But let’s wrap back around to Shannon and Zipf, and the SETI Institute’s Laurance Doyle, to whom you talked as you worked on The Contact Paradox. Doyle told you that communication complexity invariably tells us something about the cultural complexity of the beings that sent the message. And I think the great point that he makes is that the best way to approach a possible signal is by studying how communications systems work right here on Earth. Thus Claude Shannon, who started working out his theories during World War II, gets applied to the question of species intelligence (dolphins vs. humans) and now to hypothetical alien signals.

In a broader sense, we’re exploring what intelligence is. Does intelligence mean technology, or are technological societies a subset of all the intelligent but non-tool making cultures out there? SETI specifically targets technology, which may itself be a rarity even in a universe awash with forms of life with high Shannon entropy in communications they make only among themselves.

A great benefit of SETI is that it is teaching us just how much we don’t know. Thus the recent Breakthrough Listen re-analysis of their findings, which expands the search to a catalog of stars 220 times larger than the original sample, all at various distances and all within the ‘field of view,’ so to speak, of the antennae at Green Bank and Parkes. Still more recent work at the Murchison Widefield Array tackles an even vaster starfield. Still no detections, but we’re getting a sense of what is not there in terms of Arecibo-like signals aimed intentionally at us.

So how do you react to the idea that, in the absence of information to analyze from an actual technological signal, we will always be doing no more than collecting data about a continually frustrating ‘great silence?’ Because SETI can’t ever claim to have proven there is no one there.

  • Keith Cooper

That’s one of my unspoken worries about SETI; how long do we give it before we start to suspect that we’re alone? People might say, well, we’ve been searching for 60 years now – surely that’s long enough? Of course, modern SETI may be 60 years old, but we’ve certainly not accrued 60 years’ worth of detailed SETI searches. We’ve barely scratched the tip of the iceberg bobbing up above the cosmic waters.

So how long until we can safely say we’ve not only seen the tip of the iceberg, but that we’ve also taken a deep dive to the bottom of it as well? Maybe our limited human attention spans will come into play long before then, and we’ll get bored and give up. I think we can also be too quick to assume that there’s no one out there. Take the recent re-analysis of Breakthrough Listen data, which prompted one of the researchers, Bart Wlodarczyk-Sroka of the University of Manchester, to declare:

“We now know that fewer than one in 1600 stars closer than about 330 light years host transmitters just a few times more powerful than the strongest radar we have here on Earth. Inhabited worlds with much more powerful transmitters than we can currently produce must be rarer still.”

Except that we don’t know that at all. All we can say is that no one was transmitting a detectable radio signal in our direction during the brief time that Breakthrough was listening. We could have easily missed a Benford Beacon, for instance. It’s a problem of expectation versus reality – we expect these powerful, omnipresent beacons, and when we don’t find them we jump to the conclusion that ET must not exist, rather than considering the possibility that our expectation is flawed.

The Encyclopedia Galactica is a similar kind of expectation that isn’t just a fanciful notion, but is a concept that actively influences SETI – we expect ET to be blasting out this guide to the cosmos, so we tailor SETI to look for that kind of signal, rather than something like a Benford Beacon. It also biases our thinking as to what we might gain from first contact – all this knowledge given to us by peaceful, selflessly altruistic beings. It would be lovely if true, but I think it’s dangerous to expect it.

Case in point: Brian McConnell recently wrote on Centauri Dreams about his concept for an Interstellar Communication Relay – basically a way of disseminating the data detected within a received signal, giving everybody the chance to try and decipher it [see What If SETI Finds Something, Then What?]. He rightly points out that we need to start thinking about what happens after we detect a signal, and the relay is a nifty way of organising that, so that should we detect a signal tomorrow, we will already have procedures in hand.

I won’t comment too much on the technical aspects, other than to say that if a message contains a Shannon entropy of 30, then it probably won’t matter how many people try and make sense of the message, we won’t get close (A.I., on the other hand, may have a bit more luck).

The Interstellar Communication Relay is an effort to democratize SETI. My cynical side worries, however, about safeguards. The relay relies on people acting in good faith, and not concealing or misusing any information gleaned from a signal. McConnell proposes a ‘copyleft license’, a bit like a creative commons license, that will put the data in the public domain while preventing people commercialising it for their own gain. I can see how this makes sense in the Encyclopedia Galactica paradigm – McConnell refers to entrepreneurs being allowed to make “games and educational software” from what we may learn from the alien signal.

I worry about this. In The Contact Paradox, I wrote about how even something as innocent as the tulip, when introduced into seventeenth-century Dutch society, proved disruptive (https://en.wikipedia.org/wiki/Tulip_mania). The Internet, motor cars, nuclear power – they’ve all been disruptive, sometimes positively, other times negatively.

How do we manage the disruptive consequences of information from an extraterrestrial signal? Even if ET has the best of intentions for us, they can’t foresee what the effects will be when facets of their culture or technology are introduced into human society, in which case the expectation that ET will be wise and ‘altruistic’ is almost irrelevant. Heaven forbid they send us technology that could be turned into a weapon, and we can’t guarantee that bad actors – after being freely given that information – won’t run off with it and use it for their own nefarious ends. A copyleft license surely isn’t going to put them off.

My feeling is that fully deciphering a signal will take a long, long time, if ever, in which case we shouldn’t worry quite so much. But suppose we are able to decipher it quickly, and it’s more than just a simple ‘greetings’. Yes, we have to think about what happens after we detect a signal, but it’s not just the mechanics of processing that data that we have to think about; we also have to plan how we manage the dissemination of potentially disruptive information into society in a safe way. It’s a dilemma that the whole of SETI should be grappling with, I think, and nobody – certainly not me – has yet come up with a solution. But I think that revising our assumptions, recasting our expectations, and casting aside the idea that ET will be selflessly altruistic and wise, would be a good start.

  • Paul Gilster

Well said. As I look back through our exchanges, I see I didn’t get around to the Deep Time concept I wanted to explore, but maybe we can talk about that in our next dialogue, given your interest in the Cosmic Microwave Background, which is the very boundary of Deep Time. Let’s plan on discussing how ideas of time and space have, in relatively short order, gone from a small, Earth-centered universe defined in mere thousands of years to today’s awareness of a cosmos beyond measure that undergoes continuous accelerated expansion. All Fermi solutions emerge within this sense of the infinite and challenge previous human perspectives.


Odds and Ends on the Clouds of Venus

James Gunn may have been the first science fiction author to anticipate the ‘new Venus,’ i.e., the one we later discovered thanks to observations and Soviet landings on the planet that revealed what its surface was really like. His 1955 tale “The Naked Sky” described “unbearable pressures and burning temperatures” when it ran in Startling Stories for the fall of that year. Gunn was guessing, but we soon learned Venus really did live up to that depiction.

I think Larry Niven came up with the best title among SF stories set on the Venus we found in our data. “Becalmed in Hell” is a 1965 tale in Niven’s ‘Known Space’ sequence that deals with clouds of carbon dioxide, hydrochloric and hydrofluoric acids. No longer a tropical paradise, this Venus was a serious do-over as a story environment, and the more we learned about the planet, the worse the scenario got.

But when it comes to life in the Venusian clouds — human, no less — I always think of Geoffrey Landis, not only because of his wonderful novella “The Sultan of the Clouds,” but also because of his earlier work on how the planet might be terraformed, and what might be possible within its atmosphere. For a taste of his ideas on terraforming, a formidable task to say the least, see his “Terraforming Venus: A Challenging Project for Future Colonization,” from the AIAA SPACE 2011 Conference & Exposition, available here. But really, read “The Sultan of the Clouds,” where human cities float atop the maelstrom:

“A hundred and fifty million square kilometers of clouds, a billion cubic kilometers of clouds. In the ocean of clouds the floating cities of Venus are not limited, like terrestrial cities, to two dimensions only, but can float up and down at the whim of the city masters, higher into the bright cold sunlight, downward to the edges of the hot murky depths… The barque sailed over cloud-cathedrals and over cloud-mountains, edges recomplicated with cauliflower fractals. We sailed past lairs filled with cloud-monsters a kilometer tall, with arched necks of cloud stretching forward, threatening and blustering with cloud-teeth, cloud-muscled bodies with clawed feet of flickering lightning.”

Published originally in Asimov’s (September 2010) and reprinted in the Dozois Year’s Best Science Fiction: Twenty-Eighth Annual Collection, the story depicts a vast human presence in aerostats floating at the temperate levels. Landis has explored a variety of Venus exploration technologies including balloons, aircraft and land devices, all of which might eventually be used in building a Venusian infrastructure that would support humans.

We’ve already seen that Carl Sagan had written about possible life in the Venusian atmosphere, and the even more ambitious Paul Birch considered using huge mirrors in space to deflect sunlight, generate power, and cool down the planet. Closer to our time, an internal NASA study called HAVOC, a High Altitude Venus Operational Concept based on balloons, was active, though my understanding is that the project, in the hands of Dale Arney and Chris Jones at NASA Langley, has been abandoned. Maybe the phosphine news will give it impetus for renewal. The Landis aerostats would be far larger, of course, carrying huge populations. I have to wonder what ideas might emerge or be reexamined given the recent developments.

Image: Artist’s rendering of a NASA crewed floating outpost on Venus

With Venus so suddenly in the news, I see that Breakthrough Initiatives has moved swiftly to fund a research study looking into the possibility of primitive life in the Venusian clouds. The funding goes to Sara Seager (MIT) and a group that includes Janusz Petkowski (MIT), Chris Carr (Georgia Tech), Bethany Ehlmann (Caltech), David Grinspoon (Planetary Science Institute) and Pete Klupar (Breakthrough Initiatives). The group will go to work with the phosphine findings firmly in mind. Pete Worden is executive director of Breakthrough Initiatives:

“The discovery of phosphine is an exciting development. We have what could be a biosignature, and a plausible story about how it got there. The next step is to do the basic science needed to thoroughly investigate the evidence and consider how best to confirm and expand on the possibility of life.”

Phosphine has been detected elsewhere in the Solar System in the atmospheres of Jupiter and Saturn, with formation deep below the cloud tops and later transport to the upper atmosphere by the strong circulation on those worlds. Given the rocky nature of Venus, we’re presumably looking at far different chemistry as we try to sort out what the ALMA and JCMT findings portend, with exotic, hitherto unknown natural processes still possible. On that matter, I’ll quote Hideo Sagawa (Kyoto Sangyo University, Japan), who was a member of the science team led by Jane Greaves that produced the recent paper:

“Although we concluded that known chemical processes cannot produce enough phosphine, there remains the possibility that some hitherto unknown abiotic process exists on Venus. We have a lot of homework to do before reaching an exotic conclusion, including re-observation of Venus to verify the present result itself.”

Image: ALMA image of Venus, superimposed with spectra of phosphine observed with ALMA (in white) and JCMT (in grey). As molecules of phosphine float in the high clouds of Venus, they absorb some of the millimeter waves that are produced at lower altitudes. When observing the planet in the millimeter wavelength range, astronomers can pick up this phosphine absorption signature in their data as a dip in the light from the planet. Credit: ALMA (ESO/NAOJ/NRAO), Greaves et al. & JCMT (East Asian Observatory).

I’ll close with the interesting note that the BepiColombo mission, carrying the Mercury Planetary Orbiter (MPO) and Mio (Mercury Magnetospheric Orbiter, MMO), will be using Venus flybys to brake toward its destination, one on October 15, the other next year on August 10. It has yet to be determined whether the onboard MERTIS (MErcury Radiometer and Thermal Infrared Spectrometer) could detect phosphine at the distance of the first flyby — about 10,000 kilometers — but the second will close to 550 kilometers, a far more promising prospect. You never know when a spacecraft asset is going to suddenly find a secondary purpose.

Image: A sequence taken by one of the MCAM selfie cameras on board the European-Japanese Mercury mission BepiColombo as the spacecraft zoomed past the planet during its first and only Earth flyby. Images in the sequence were taken in intervals of a few minutes from 03:03 UTC until 04:15 UTC on 10 April 2020, shortly before the closest approach. The distance to Earth diminished from around 26,700 km to 12,800 km during the time the sequence was captured. In these images, Earth appears in the upper right corner, behind the spacecraft structure and its magnetometer boom, and moves slowly towards the upper left of the image, where the medium-gain antenna is also visible. Credit: ESA/BepiColombo/MTM, CC BY-SA IGO 3.0.

And keep your eye on the possibility of a Venus mission from Rocket Lab, a privately owned aerospace manufacturer and launch service, which could involve a Venus atmospheric entry probe using its Electron rocket and Photon spacecraft platform. According to this lengthy article in Spaceflight Now, Rocket Lab founder Peter Beck has already been talking with MIT’s Sara Seager about the possibility. Launch could be as early as 2023, a prospect we’ll obviously follow with interest.

A final interesting reference re life in the clouds, one I haven’t had time to get to yet, is Limaye et al., “Venus’ Spectral Signatures and the Potential for Life in the Clouds,” Astrobiology Vol. 18, No. 9 (2 September 2018). Full text.
