Opposite Day

Opposite-colored pigeons

It’s Opposite Day, or National Opposite Day, if you’re in the United States. Which means you probably can’t believe anything you hear or read. Assume, for example, that anything tweeted today by members of the Executive Branch of the United States is meant to mislead you—all in the name of good fun, of course! Most likely the opposite is true.

On Opposite Day, it’s important to remember the difference between an argument and a contradiction. But if you should happen to be enslaved by a group of androids, you can use Opposite Day to your advantage by overloading their logic circuits. There’s an easy way to do this, and there’s a hard way.

Image credit: Pixabay


Author: Joe Kissell

Leap Seconds

Plot showing the difference UT1−UTC in seconds; the vertical segments correspond to leap seconds, and the red portion of the graph was a prediction (future values) at the time the plot was made.

Time keeps on slippin’

There’s no easy way to say this, so I’m just going to put it out there: the Earth rotates at the wrong speed. Or, rather, it usually does. You may think it takes 24 hours to go around once, but sometimes it takes about 2 milliseconds more. To be fair, the planet’s speed has been reasonably consistent for most of the past 20 years, but for many years prior to that, it was consistently behind. Scientists believe that in the future, it will revert to its old ways, and probably even get worse. Although 2 milliseconds a day doesn’t sound like much, it adds up to about a second every year and a half. Which means that, eventually, the world’s most accurate clocks get noticeably out of sync with the observed rotation of the planet. The solution, for several decades, has been to add “leap seconds” as necessary to our official time standards so that they match what the planet is doing—although this procedure has some problems of its own.
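
If you'd like to check that arithmetic, here's a quick back-of-the-envelope sketch in Python (the 2 milliseconds per day is the rough average quoted above, not a precise constant):

```python
# Back-of-the-envelope check: how long does an extra 2 ms per day
# take to add up to a full second of drift?
EXCESS_PER_DAY = 0.002  # extra rotation time per day, in seconds (rough average)

days = 1 / EXCESS_PER_DAY   # days to accumulate one second of drift
years = days / 365.25       # the same figure in years

print(f"{days:.0f} days, or about {years:.1f} years, per second of drift")
# -> 500 days, or about 1.4 years, per second of drift
```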

In a way, this is all a question of how you define words like “second” and “wrong.” When it comes to timekeeping, things are seldom as they appear.

Just a Second…

For starters, how long is a second? Obviously, it’s 1/60 of a minute, which is 1/60 of an hour, which is in turn 1/24 of a day—in other words, a second is 1/86,400 of a day. Or is it? Well, it used to be. One can determine when exactly one day has elapsed by looking at the sky, but this means that all the smaller time segments could be known only after some division; they didn’t have any real meaning on their own. And the precise astronomical observations needed to determine just when a day has elapsed were rather inconvenient for most scientists, let alone the rest of us. More importantly, astronomers have known since the late 18th century that the speed of the Earth’s rotation is not constant—so calculating seconds, minutes, and hours strictly as fractions of a day means that each of those units could have a slightly different value every day.

However, the development of atomic clocks in the 1940s and 1950s changed all that. Atomic clocks work by counting the vibrations of certain atoms, which are, in theory, invariant forever. So in 1967, the International System of Units (SI) defined the second as 9,192,631,770 vibrations of the cesium-133 atom. Interestingly, they did not arrive at that figure by counting the number of vibrations in 1/86,400 of a day. Instead, they based it on something called ephemeris time, in which hours, minutes, and seconds are derived not from the rotation of the Earth, but from its revolution around the sun. That speed, too, varies; scientists chose the length of the ephemeris second in 1900 as their arbitrary standard. As later research revealed, the last time a day (by ordinary, solar reckoning) was exactly 86,400 SI seconds long was in 1820. So the length of a second, as measured by the Earth’s rotation, suddenly became “wrong” according to the new definition.

No Time Like the Present

We did, however, finally have a nice, consistent standard that was readily measurable on its own terms without astronomical observations. But the Earth apparently didn’t get the message that it was supposed to conform to this new standard, and its rotation kept slowing, ever so slightly, with each passing year. In 1972, Coordinated Universal Time (which, for complicated reasons, goes by the initials UTC rather than CUT) was adopted as the new international standard, based on measurements from atomic clocks. (An aside: even atomic clocks don’t always agree with each other; the official universal standard is based on the average times from about 250 atomic clocks.) But astronomers still needed a timekeeping system that matched what they observed—irregularities and all.

To deal with the mismatch between UTC and astronomical time, the scientists charged with maintaining UTC decided that whenever that difference approached ±0.9 seconds, a “leap second” would be added to or subtracted from UTC as a correction—in other words, an occasional 61- or 59-second minute. Between 1972 and 1998, 22 such seconds were added. But then, for reasons that are not entirely clear, the planet decided to stick to an 86,400-second-a-day rotation for a while, and no leap seconds were needed from 1999 through 2004. From 2005 through 2018, only five leap seconds were added—about one every three years on average. But historical data shows that the Earth has a habit of changing its rotational speed quite frequently, and the trend over many centuries has clearly been a gradual slowing. So most experts believe that the need for leap seconds will never entirely disappear.
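
Here's a minimal sketch of that decision rule in Python. The ±0.9-second threshold comes from the text above, but the function, its name, and its sign conventions are illustrative assumptions on my part, not anyone's official algorithm:

```python
def leap_second_correction(ut1_minus_utc: float, threshold: float = 0.9) -> int:
    """Decide whether UTC needs a leap second.

    ut1_minus_utc: the difference between astronomical time (UT1) and
    atomic time (UTC), in seconds. Returns +1 to insert a leap second
    (a 61-second minute), -1 to remove one (a 59-second minute), or 0.
    """
    if ut1_minus_utc <= -threshold:
        # The Earth's rotation has fallen behind the atomic clocks (the
        # usual case): hold UTC back with an extra second, 23:59:60.
        return +1
    if ut1_minus_utc >= threshold:
        # The Earth is running ahead: drop a second. Permitted by the
        # standard, but never yet needed in practice.
        return -1
    return 0

print(leap_second_correction(-0.85))  # 0 (not yet)
print(leap_second_correction(-0.95))  # +1 (insert a leap second)
```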

Take a Leap

On the other hand, quite a few people are fed up with the whole notion of leap seconds, for reasons both philosophical and practical. For one thing, a lot of the world’s clocks and computer systems were not designed to handle leap seconds elegantly. This is not usually a big deal, but sometimes the difference of a second means everything—in financial transactions, for example, where the prices of stocks, currencies, and whatnot can change instantly. More importantly, if the planet’s rotation continues to slow at its historical rate, leap seconds will eventually be needed more and more often. In as little as a few years, the increased disparity between UTC and other timekeeping standards could cause serious problems, including potential failure of GPS and other navigational tools. And on a much longer time scale—say, 50,000 years—the Earth’s rotation could take 86,401 SI seconds each day, meaning we’d need to add a leap second every single day, or else redefine “second” to fit the facts.
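
As a sanity check on that time scale: assuming the day keeps lengthening at roughly 2 milliseconds per century (a commonly cited long-term figure, and my assumption here rather than a number from this article), the arithmetic works out:

```python
MS_PER_CENTURY = 2.0  # assumed long-term lengthening of the day, in ms per century
ms_to_gain = 1000.0   # the day must gain a full second (1,000 ms) to reach 86,401 s

centuries = ms_to_gain / MS_PER_CENTURY
print(f"about {centuries * 100:,.0f} years")  # -> about 50,000 years
```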

So one proposal currently being considered by the world’s standards committees is to stick with UTC but abandon leap seconds altogether. That sounds easy enough, but there are drawbacks: it would make astronomers’ work harder and require them to invest a lot of money in upgrading their equipment. As for the rest of us, we’d live with the small difference between astronomical time and atomic time until it accumulated to 60 minutes—at which point we’d add a “leap hour,” just as most of us do once a year at the end of Daylight Saving Time. The first such hour wouldn’t happen for more than four centuries, at which point it would be someone else’s problem.

Until the world’s timekeeping experts get this all sorted out—which may be never—the International Earth Rotation and Reference Systems Service carefully measures the speed of the Earth’s rotation, issuing periodic bulletins as to whether we need to add another leap second at the end of the following June or December. If and when the next leap second occurs, you probably won’t notice. But many millennia in the future, your descendants may finally get 25-hour days.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on April 29, 2005.

Image credit: Tomia/Gordon P. Hemsley/RP88 [Public domain], via Wikimedia Commons


Author: Joe Kissell

National Peanut Butter Day

Peanut butter in a jar

Although people have been roasting and grinding up peanuts for millennia, it wasn’t until the late 19th century that Marcellus Gilmore Edson invented a way to make a paste by milling roasted peanuts. Various other inventors made substantial contributions to the development of peanut butter over the following few decades. In 1922, one Joseph Rosefield figured out how to homogenize peanut butter so that the oil didn’t separate, and his process led to the commercial success of such brands as Peter Pan (my personal favorite to this day) and Skippy. Jif didn’t come along until 1958, and don’t even get me started on GIF. I understand that peanut butter fans can be quite particular about their preferred formulation (homogenized or not, sweetened or not, smooth or chunky, etc.). Today, enjoy peanut butter in whatever form makes you happiest!

Image credit: PiccoloNamek at English Wikipedia [CC BY-SA 3.0]


Author: Joe Kissell

Beurre Salé

A container of Societe France caramels au beurre sale

The savory treat from Brittany

For the health-conscious, salt and butter are very high on the list of dietary no-no’s, although in recent years butter’s reputation has slightly improved (while margarine has become less well-thought of). And as Mark Kurlansky detailed in his fascinating book Salt: A World History, salt has played an important role in human society and is even necessary (in a certain amount) to the healthy functioning of our bodies.

But just because we crave something doesn’t mean it’s healthy to have it in copious quantities, no matter how tempting it seems. Years ago, I ran across a blog post by professional chef and author David Lebovitz that made me aware of the tastiest combination of this “evil” duo: creamy butter laced and studded with large crystals of salt. This tempting concoction was beurre salé (literally, “salted butter”), a regional specialty from the Brittany region of France, and the sight of it made me want to eat copious quantities of it, health concerns notwithstanding.

Worth Its Salt

So what makes this salted butter different from the kind you can buy at the local supermarket? For one thing, it’s made with Brittany sea salt, some of the finest produced anywhere. Brittany is located in the northwest corner of France (south of Normandy) and its lengthy ocean coastline is a perfect place for cultivating salt. Its most famous type of salt, fleur de sel, comes from the town of Guérande (which was historically part of Brittany, but is now part of the Pays de la Loire region), and is world-renowned for its texture and flavor.

With such ready access to salt, Breton cuisine developed to take advantage of it. Whereas most other regions of France boast dozens, if not hundreds, of local cheeses, there is not even a word for cheese in the Breton language. A few cheeses can be found in the region, but less-processed dairy products (butter and cream) are much more prevalent in Breton cuisine. The reason is that before the advent of refrigeration, making milk into cheese was more effective against spoilage than making butter—that is, unless you had plenty of salt.

Sel Preservation

Salt has historically been used as a preservative; that is a large part of why it has been so coveted throughout human history. In the case of butter, this was especially so, since butter has a tendency to quickly go rancid when it is exposed to air. Refrigeration has taken care of this problem for modern butter-eaters (and the invention of a water-sealed butter dish helped too), but it was a serious problem for our ancestors. According to Margaret Visser’s delightful book Much Depends on Dinner (which includes individual chapters on butter and salt), butter that has been oxidized (exposed to the air) can cause “diarrhoea, poor growth, loss of hair, skin lesions, anorexia, emaciation and intestinal haemorrhages.”

Mixing butter with salt, or storing it in brine, was a way to prevent butter from going rancid, and was commonly done before the days of refrigeration. Indeed, a record from 1305 CE noted that one pound of salt was added to every ten pounds of butter or cheese. To remove some of the salt, people had to rinse the butter by kneading water into it, and then squeezing it out again along with some of the salt.

Butter Batters

No longer a necessity, beurre salé is today a gourmet treat; it is used in many traditional Breton dishes, and is coveted for its delicious effect on everything from fine chocolates to buttery cakes. It may seem counterintuitive, but salt can be as important as sugar in many dessert recipes, and lends an interesting counterpoint to the sweetness.

Traditional Breton desserts made using beurre salé include: gâteau breton, a type of poundcake made with flour, butter, sugar, and eggs; palets bretons, small buttery cakes; and caramel au beurre salé, which can refer to individual candies (salted caramels) or the process of caramelizing sugar and salted butter while baking a dessert, such as Kouign Amann. Amann is the Breton word for butter, and this cake is made with plenty of it, along with yeast, sugar, flour, and water.

Salty Language

Since I wrote the first version of this article in 2006, salted caramel (inspired by caramel au beurre salé) has become much more popular in North America; everywhere you look these days, the salted caramel flavor is popping up, from popcorn and hot chocolate to cookies and coffee drinks. I have since tried beurre salé in its birthplace, on several trips to Brittany, and have loved every bite, whether slathered on a baguette or baked into a Kouign Amann. And while living in Paris in the late 2000s, I also had the pleasure of meeting and befriending David Lebovitz and Margaret Visser, both originally cited in this article. It was an unexpected delight, much like finding crystals of briny sea salt in the midst of delicious and creamy butter.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on July 31, 2006.

Image credit: Photozou.jp [CC BY]


Author: Morgen Jahnke

Library Shelfie Day

A man looking at a library shelf

It’s one thing to like libraries, or even to say you like libraries. Today’s observance demands proof: pix or it didn’t happen. Starting in 2014, the New York Public Library declared the fourth Wednesday in January to be Library Shelfie Day. So let’s break this down: a shelfie is like a selfie, except it’s taken in front of a shelf. A library shelfie is a shelfie taken in a library. So grab your smartphone and head on over to the nearest library today to participate. While you’re at it, pick up a few good books!

Image credit: Pixabay


Author: Joe Kissell

Fernet-Branca

Bottles of Fernet-Branca

Italy’s mystery liqueur

While some companies are completely transparent about the ingredients in their products, hoping to snag customers looking for the healthiest option, in some cases the secret of a product’s makeup is not only closely guarded, but promoted as a key part of its allure. Mysteries can be a great advertising gimmick.

The proprietors of Antoine’s restaurant in New Orleans were clearly operating from this idea when they created their famous recipe for Oysters Rockefeller; although it has been widely speculated upon, this recipe has remained a secret since it was first developed in 1899. Having sampled Oysters Rockefeller at Antoine’s, I would say that I greatly enjoyed their taste, but I got more enjoyment out of trying to guess the elements of the recipe.

This same type of marketing is at work in the promotion of Dr Pepper soda. The only information given by the manufacturer is that it contains 23 flavors; it’s up to customers to draw their own conclusions about what those flavors are. Ditto for Coca-Cola’s secret recipe and the “11 herbs and spices” in Kentucky Fried Chicken. In a similar vein, the makers of one of Italy’s best-known liqueurs, Fernet-Branca, prefer to keep the composition of their product top secret, but rumors about what it may contain are certainly tantalizing.

Saffron So Good

Fernet-Branca is a type of bitters, a spirit made from different herbs, plants, and roots that supposedly aids digestion and stimulates the appetite. Other types of bitters include Campari, Angostura bitters, and orange bitters. While the complete list of 40 herbs and spices that go into Fernet-Branca has never been made public, some of its ingredients are common knowledge, and include myrrh, chamomile, cardamom, aloe, and saffron, as well as its base component of grape alcohol.

Saffron in particular seems to be an important ingredient; this rare spice, harvested from the saffron crocus flower, is the world’s most expensive spice by weight. According to an article about Fernet-Branca that appeared in a San Francisco newspaper, the company that produces Fernet-Branca, Fratelli Branca, is the largest consumer of saffron in the world, claiming 75% of worldwide output.

As far as the other ingredients are concerned, there is wide speculation about what these may be. A few of the rumored mystery elements include: rhubarb, cinchona bark from South America (known for its anti-malarial properties), gentian root (a powerful medicinal herb), wormwood (used in absinthe), bay leaves, sage, peppermint oil, and the ginger-like spices galanga and zedoary.

Medicinal Compound

With all these medicinal ingredients, it is not surprising that Fernet-Branca was first developed as a health elixir. The creator of the formula, Bernardino Branca, was a self-taught apothecary in Milan, who first offered Fernet-Branca to the public in 1845. Marketing his product as a tonic to cure many kinds of illness, Branca even persuaded the director of a local hospital of its curative benefits.

Even today, Fernet-Branca is known for its ability to calm upset stomachs and soothe hangover misery. If earlier marketing pitches for the spirit are to be believed, it can also cure cholera and ease menstrual cramp pain. The health-enhancing nature of Fernet-Branca proved handy during Prohibition in the United States. Since it was considered a medicinal product, pharmacies could import and sell Fernet-Branca without interference from the government.

Where Everybody Knows Its Name

Although Fernet-Branca is made by an Italian company, Italy is not the largest consumer of this liqueur. In fact, there are two other places in the world known for their prodigious consumption of the bitter quaff. These two places are San Francisco, California, and the country of Argentina.

San Francisco is the biggest consumer of Fernet-Branca in the United States, and has the highest per capita consumption of it in the world. Its popularity in the city may be partially attributed to San Francisco’s large Italian-American community, centered around the commercial district of North Beach. Whatever the reason, San Franciscans drink Fernet-Branca in large quantities, usually followed by a chaser of ginger ale.

Argentina also has a large Italian population and a similar thirst for Fernet-Branca. There is even a popular song that celebrates the joys of “Fernet Con Coca,” or Fernet mixed with cola, the usual way it is prepared in Argentina. In fact, the only other distillery of Fernet-Branca located outside Milan is in Argentina.

Above and beyond its regional popularity, Fernet-Branca has made a mark on pop culture as well. Fans of Christopher Nolan’s film The Dark Knight Rises will no doubt be put in mind of Alfred’s memorable references to the drink. On the literary side, James Hamilton-Paterson’s novel Cooking with Fernet Branca, a humorous look at life in Tuscany, was longlisted for the prestigious Booker Prize.

A Matter of Taste

After living in San Francisco for quite some time and feeling ashamed that I had never enjoyed this quintessential San Francisco experience, I finally tried my first shot of Fernet-Branca some years ago. Unsure of what to expect, and slightly put off by the strong pine scent I registered, I closed my eyes and gulped it down.

The intense menthol-like sensation caused me to cough, and I didn’t enjoy the bitter aftertaste the drink created, but soon after finishing it, I began to feel a bit better. I can’t say whether or not I gained any health benefits from drinking the Fernet-Branca, but the next time I experience an upset stomach I will have to try another shot of it, for purely medicinal purposes of course.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on October 16, 2006.

Image credit: Jesús Dehesa [CC BY-ND 2.0], via Flickr


Author: Morgen Jahnke

National Hot Sauce Day

Marie Sharp's Hot Sauces

Whether you opt for the quite mild Tabasco sauce, the “hey-I’m-a-trendy-hipster” Sriracha, or something considerably higher on the Scoville scale, today—National Hot Sauce Day—is the day to add a liquid capsaicin solution to your favorite foods. Pro tip: plain white rice and white milk are much, much better for soothing that “Holy frak my hair is on fire!” feeling than water. (In sufficient quantities, tequila is also reputed to work, although the side effects might not be to your liking.)

Image credit: Kaldari [CC0], from Wikimedia Commons


Author: Joe Kissell

Pennsylvania Coal Fires

Cracked highway from subsurface coal fire

Heat under the street

There are a bunch of little facts that I sort of half-learned in elementary school, and only those that came with terrific mnemonics have managed to stick. I remember the terms “Dromedary” and “Bactrian,” for example, because a D has one hump (like a one-hump Dromedary camel) and a B has two (like a two-hump Bactrian camel). But I never acquired a similar method for remembering cloud types—cirrus, cumulus, nimbus—I know the names but I forget which is which. And then there’s coal. I vividly recall learning about anthracite, bituminous, and lignite coal as a child in Pennsylvania, a state legendary for its coal production. But which type had which properties or uses? It’s all a blur now. Since I did not pursue an education or profession in which this knowledge was needed, my brain apparently decided to delete those records to make space for really important information, such as Star Trek trivia.

I do remember, though, that when I was quite young my father took me to a coal mine that offered tours to the public. I thought it was absolutely the coolest thing ever. Getting to ride in that train down into the dark tunnels, seeing all that amazing machinery, and imagining the life of a miner was exciting and mysterious. I’ve always had a fondness for caverns and tunnels—maybe that’s where it all started.

As an adult living in California, I rarely think about coal mines. I do, however, think about wildfires and forest fires, especially in the dry months of late summer. Everyone understands that these things just happen—due sometimes to natural causes, sometimes human causes (accidental or intentional). And when they occur, vast firefighting resources are unleashed to contain the fires in order to minimize the risk to homes and businesses. After all, they pose an imminent threat, plain for all to see (and smell). Of course they have to be stopped.

But Pennsylvania has the distinction of being home to the largest number of underground coal fires in the United States. Some of these fires have been burning continuously for more than 50 years; they’ve obliterated entire towns; they vent an unimaginable amount of carbon dioxide and other gases into the already overburdened atmosphere; and, for the most part, very little is being done about them. All these facts astonish and disturb me, but none more than the very possibility of the fires’ existence. How can a fire rage underground for decades or even centuries? The answer: very easily.

Fire in the Hole

Picture an abandoned coal mine—there are thousands of them in Pennsylvania. Although much of the coal has been removed, plenty still remains—perhaps just not in a configuration that’s easily extractable. Miles of tunnels, their ceilings shored up with columns of unexcavated coal, lie empty. Though the entrance to the mine may have been sealed, that seal was by no means complete or airtight. And suppose some of the coal lies very close to the surface—or is even visible in an exposed seam. Now something happens to ignite the coal. It may be a natural cause—lightning, for instance, or even spontaneous combustion given the right conditions. Or maybe a forest fire, or someone burning garbage.

Once the coal begins burning, it feeds off the air in the tunnels and the ventilation shafts that were used to supply air to the miners. Still more air seeps through natural cracks in the rock. Coal burns easily, requiring only a tiny amount of oxygen—and with millions of tons of fuel handy, it soon spreads beyond the existing tunnels and into the thick strata of coal that lie under immense tracts of land. When enough of the coal burns through, the ground above it collapses—an effect known as subsidence. The newly formed cracks or pits allow more air in, accelerating the fire’s spread. Meanwhile, carbon dioxide, smoke, and steam escape, killing plants and making the area’s air unsafe for humans and wildlife.

Our State Insect: The Firefly (no kidding)

No one can say for sure how many such fires currently rage in Pennsylvania, but the number is unquestionably in the dozens. The number is hard to pin down because coal fires that seem to be out can smolder at low temperatures for years and then flare up again; the process of checking to see whether they’re still going carries with it the risk of making matters worse by adding more air.

The largest and most infamous of Pennsylvania’s coal fires is under the town of Centralia. It started in 1962, apparently due to someone burning garbage in the town dump. For decades, a combination of bureaucratic delays, funding shortages, and ineffective containment efforts permitted the fire to grow to the point that the entire town (formerly home to 1,100 people) was condemned and basically shut down. A handful of residents remain, despite repeated government orders to evacuate. They enjoy peace and quiet for the most part, but worry about the ongoing threats of subsidence, toxic fumes, and careless tourists injuring themselves.

Down and Out

Underground coal fires are notoriously difficult to extinguish. If it were a simple matter of pumping water (or some other substance) into the old mine tunnels to suffocate the fires, they would have been put out long ago. Part of the problem is simply getting to the spots that are on fire; another part is pushing out all the oxygen, given the porous nature of the coal and the rock in which it’s embedded. And then there’s the scale: the volume of underground space affected by the fire is immense (and growing all the time). Conservative estimates put the cost of containing (not extinguishing) the Centralia fire alone at well over half a billion dollars. And, of course, that’s just one fire—there’s always another. Since that sort of money is nowhere to be found, officials throw up their hands and say, “We’ll just let it burn out.” How long will that take? Experts think there’s enough coal to keep it going for another 250 years.

Pennsylvania is by no means the only place with unquenchable underground coal fires. Similar fires burn in other parts of the United States, as well as China, India, Indonesia, and elsewhere around the world. Under Australia’s Burning Mountain Nature Preserve is a coal fire that has been burning for at least 2,000 years, and possibly as long as 5,500 years. In all, there may be hundreds of thousands of active coal fires, and only in rare cases are any serious efforts being made to stop them.

By some estimates, coal fires are a bigger contributor to global warming than cars—a truly staggering thought. Although fighting them is difficult and expensive, very little money has been spent looking for technological solutions. And one of the biggest reasons is simply that the fires are, for the most part, invisible. While a California wildfire may be an obvious threat requiring immediate action, it’s hard to convince governments to put money into solving a problem that can’t be seen—especially when it’s relatively cheap simply to relocate residents and put up fences and warning signs.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on October 23, 2004.

Image credit: James St. John [CC BY 2.0], via Flickr


Author: Joe Kissell

National New England Clam Chowder Day

A bowl of New England clam chowder

As everyone knows, a chowder is a thick, cream- or milk-based soup, often containing potatoes along with other vegetables and, usually, some type of seafood. I’ve made some delicious corn chowder, but clam chowder has always been my favorite. Except…one time I ordered the thing called “clam chowder” and got a bowl of red liquid with tomato chunks, some vegetables, and a few pieces of clam. That was what people call “Manhattan clam chowder,” which is right up there with Welsh rabbit in the list of foods whose names contradict their ingredients (or vice versa). Although I personally find “Manhattan clam chowder” inedible, anyone who wants to consume that particular combination of foodstuffs has my blessing—just don’t call it “chowder,” for crying out loud. Today, lovers of real clam chowder—that is, New England clam chowder for those who want to be sure they don’t get the wrong thing, or just “clam chowder” to any reasonable citizen—get to celebrate their rightness.

Image credit: Jon Sullivan [Public domain], via Wikimedia Commons


Author: Joe Kissell

Legends of Tierra del Fuego

Satellite image of Tierra del Fuego

The incredible shrinking southern continent

As an American, I’ve always been accustomed to clearly defined state, national, and continental boundaries. The border between Canada and the United States, for example, may be an arbitrary line of latitude, but we all know exactly where it is—what’s in, and what’s out. We know exactly where North America stops and Central America starts; we also know when we’ve reached the easternmost or westernmost edge of the continent because we run into an ocean. Sure, there’s the odd island off the coast here or there, but conceptually, these cause no problems for my notion of what a continent is.

The map of South America, though, has always offended my sense of geographical tidiness. At the southern end of the continent, the land sort of swoops out to the east—but wait, that last big chunk is actually an island. Is that part of the continent? And what about the bazillions of smaller islands littering the coastline to the south and west? If I’m on one of those islands, am I on the continent or not? The geological answer is yes—I’m on the same continental plate. The political answer is also yes—any given spot of any given island is uncontroversially under the control of either Chile or Argentina. But to the average person on the street (or boat, as the case may be), these boundaries are neither visible nor intuitive. Today, we can get the answers to such questions from highly accurate maps. Hundreds of years ago, though, the answers were far less obvious. Speculation about continental boundaries led to some fanciful maps, tall tales, and grand adventures.

What Goes Around

I was standing in a museum in a town on Tierra del Fuego—a name given to the entire archipelago of little islands at the tip of South America as well as to its largest island, which is known more properly as Isla Grande de Tierra del Fuego. (For other articles I’ve written here about Tierra del Fuego, see Ushuaia, Extinction of the Yámana, and Pan del Indio.) On the wall was a map from the 16th century showing the landmass we know today as South America extending all the way south to connect with a vast southern continent much larger than Antarctica. In other words: an unbroken stretch of land all the way from pole to pole. This hypothetical continent, which also encompassed Australia, had a detailed imaginary coastline that was represented as being accurate, even though no cartographer had come anywhere near it. Europeans at the time referred to this continent as Terra Incognita (the Unknown Land)—or Terra Australis (the Southern Land). In the 4th century BCE, Aristotle had advanced the idea that a great southern continent must exist, because without it the world would be top-heavy. This view was later expanded on and popularized by Greek geographer Ptolemy in the 2nd century CE. But as of the early 16th century, no European had actually seen this land.

In 1520, Ferdinand Magellan became the first European explorer to discover a sea route from the Atlantic Ocean to the waters west of South America, to which he gave the unfortunate name “Pacific Ocean.” But as he was passing through what came to be called the Strait of Magellan, with South America clearly on his right, Magellan could also see land to his left. When he realized the channel went all the way through, he drew what was for him the logical conclusion: the land he’d seen to the south must be the tip of the great southern continent. He gave it the name Tierra del Fuego (“land of fire”) upon seeing the smoke rising from numerous fires built by the land’s inhabitants. This name, of course, suited the continent’s popular image as a mysterious and forbidding place. And Magellan’s discovery—apparently the first proof of the existence of Terra Australis—required only minor modifications to the maps of the time.

Down and Out

It was not until 1578 that Francis Drake, in an attempt to circumnavigate the globe, discovered the truth about Tierra del Fuego. Drake sailed through the Strait of Magellan, but his ship was blown south by a storm; he soon found himself rounding the tip of a large island chain. Now there was another way to get between the oceans—the Drake Passage. Although Drake did not sail all the way to Antarctica, he, too, drew the logical conclusion that it must be down there somewhere—as in fact, by chance, it was. A few decades later, in 1616, Cape Horn—at the tip of Horn Island—was identified as the southernmost point of land that could be construed as part of South America. In the 18th century, Captain James Cook charted Australia—and determined that it, too, was not the imagined southern continent, its new name notwithstanding. Only in the early 19th century did explorers first set foot on Antarctica and begin to correct the old maps once and for all.

All of the foregoing is, I’m sure, familiar to anyone who (unlike me) actually paid attention in history and geography classes. But it was a revelation to me, looking at an old map in a museum, that assumptions about the nature of the world—unsubstantiated though they were—could have led to such startling errors, such blatant (if well-meaning) fabrications, and so many years during which myths were misrepresented as fact. True or not, deeply ingrained beliefs die hard. If you never thought you could learn anything from history, keep this little lesson in mind as you read today’s news.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on February 22, 2005.

Image credit: NASA [Public Domain]


Author: Joe Kissell