National Oysters Rockefeller Day

Oysters Rockefeller

We’ve covered Oysters Rockefeller previously here on Interesting Thing of the Day—it’s a baked oyster dish with a secret recipe—often imitated but never precisely replicated. If you want the original version, I’m afraid you’ll have to visit Antoine’s in New Orleans—the same restaurant that declared January 10 to be National Oysters Rockefeller Day starting in 2017. Today, however, if you are inclined to make Oysters Rockefeller yourself, or buy them at another restaurant, I hereby authorize you to use any recipe that tastes good. (Offer expires at midnight, sorry.)

Image credit: Larry Hoffman [CC BY-SA 2.0], via Flickr


Bahasa Indonesia

Samples of Bahasa Indonesia books

The complex story of a simple language

During college I spent a summer in Indonesia, and naturally I picked up a bit of the language. When I say “the language,” I’m referring to Indonesian or, as it is known in Indonesian, Bahasa Indonesia (“language of Indonesia”). This statement is not as obvious as it may sound; Indonesia is home to hundreds of languages, and of these, Indonesian is not spoken as a first language by the majority of the population. But it is the lingua franca, so it’s useful for citizens and travelers alike. I found Indonesian to be straightforward and easy to learn, free of most of the irregularities and annoyances of the Romance languages.

What I understood at the time was that Indonesian is, for the most part, the same language as Malay (Bahasa Melayu), the national language of neighboring Malaysia. I assumed that there were some differences, but that the main one was simply the name. I had no idea at that time of how either version of the language came into existence. It turns out that there’s a bit of a modern myth about the language’s origin—but the truth is even more interesting.

Artificial Intelligence

While doing some research on an unrelated topic, I stumbled upon a webpage claiming that Indonesian was an artificial language. I’d never heard that before and it piqued my interest, so I dug further. A few minutes of web searches turned up quotes such as the following (identities omitted to protect the guilty):

Bahasa Indonesia is an artificial language made official in 1928. By artificial I mean it was designed by academics rather than evolving naturally as most common languages have.

…Indonesian [is] a very simple Malay-based artificial language, designed by academics, and was the official language for a multiethnic country of over 230 million inhabitants.

…Indonesian is a constructed language made by a Dutch missionary in the 1920s on the basis of synthesizing some local languages.

…[Indonesian] was devised by a Dutch linguist, based on various Malayan and Indonesian varieties…in the 1920s.

The language in Malaysia, Bahasa Malay, is a constructed language, and was designed to be easy to learn, as the various people in Malaysia and Indonesia who were told to form rather large nations after WWII needed a common language.

…every language is artificial—it just depends how many people create it. Bahasa Indonesia is also invented but by a group.

Bahasa Indonesia is essentially a constructed language designed to fool foreigners into thinking Indonesia is a monoculture.

…the other major semi-artificial language of recent times, Bahasa Indonesia, the national language of Indonesia, is a syncretic amalgamation of existing Malay dialects that were still in current use.

Even though it is basically the Malay language, [Indonesian] has in common with Esperanto…the fact of having underwent [sic] a kind of planned restructuration to simplify grammar and reduce exceptions.

With all that evidence, I was nearly convinced, though I wasn’t entirely certain what I was convinced of. This string of claims sounded a bit like the telephone game, where a message changes just a bit with each retelling. Then a little voice in the back of my head whispered, “Primary sources, Grasshopper.” Every fact on the web appears to be equally authoritative, but just because somebody says something with conviction doesn’t mean it’s true. So I went to an actual library (two of them, in fact) and looked at ancient documents known as “books”—some more than fifty years old—to see if I could get to the bottom of this story. After all, if a Dutch linguist (or missionary) did in fact invent the language, I should be able to find that person’s name. And if a committee of academics invented it, I should be able to find some record of that momentous project.

Let me cut to the chase: as with all myths, this one has a kernel of truth to it. But the claim that Indonesian is an “artificial” or “constructed” language is simply false.

This Land Is Your Land, This Land Is Island

Indonesia is an archipelago consisting of over 18,000 islands, of which about a third are inhabited. That these islands—and their greatly varying cultures and languages—should be considered a single nation is a relatively recent (and, ethnographically speaking, artificial) notion. Nevertheless, for centuries, traders sailing from one island to another have needed to communicate with each other. Malay was the local language of Malacca, a port town near the southern tip of the Malay Peninsula. According to legend, local fishermen in Malacca developed Malay as a synthesis of several nearby languages in the late 16th century. However, written records of Malay date back as far as the 7th century, so it is more likely that the fishermen simply integrated new words into the existing language. (Such borrowing happens in virtually all languages, and the newly incorporated words are known as “loan words.”) In any event, Malacca was a hot spot for traders, and by the time the Dutch colonized Indonesia (then known as the Dutch East Indies) in the 17th century, Malay had already come into widespread use as the regional trade language.

During their more than three centuries of occupation, the Dutch, unsurprisingly, attempted to enforce the use of their own language for trade. In the process, Malay—as spoken in Dutch territory—picked up a number of Dutch loan words, while the Malaysian speakers of Malay developed a somewhat different vocabulary. Meanwhile, due to the influence of Islam, which had been introduced in Indonesia as far back as the 13th century, Malay also picked up a number of Arabic loan words. Because parts of Indonesia were Hindu, Sanskrit also gave numerous words to Indonesian—including “bahasa” (“language”). And since Portugal traded in Indonesia and for many years controlled East Timor, many Portuguese words also found their way into the language. In short: without question, the Indonesian variety of Malay did indeed borrow heavily from numerous other languages, but this was a natural linguistic evolution. However, there’s still more to the story.

The Language of Change

By the 1920s, public sentiment in Indonesia was turning strongly toward gaining independence from the Netherlands. In October 1928, the Sumpah Pemuda (Pledge of the Youth) proclaimed that in Indonesia, Malay was to be called “Bahasa Indonesia” and considered the national language. However, there being no nation as yet, this was more of a rallying cry than anything else. In 1945, Indonesia declared its independence from the Netherlands and stated in its constitution that Bahasa Indonesia was its official language—though it took four years of fighting before the Dutch acknowledged Indonesia’s right to self-rule. So depending on how you look at it, Indonesian became the official language in 1928, 1945, or 1949—though at that time, only a tiny percentage of the nation’s population spoke Indonesian as a first language.

Following independence, the people of Indonesia rapidly abandoned Dutch (to the extent that they had grudgingly adopted it) and began to embrace their new official tongue. It is now the first language of more than 40 million people, and a second language for over 150 million. Although these numbers are still small given Indonesia’s total population of more than 260 million, they represent astonishingly rapid growth for the language.

In 1972, the governments of Indonesia and Malaysia collaborated on a project to reform and simplify spelling for both versions of the language; this consisted largely of eliminating Dutch spellings in favor of more phonetic Malaysian spellings. Malay and Indonesian have about an 80% overlap in vocabulary and are mutually intelligible; the variations in vocabulary, pronunciation, and usage have been compared to the difference between American English and British English. Where Indonesian retains many Dutch loan words, Malay typically replaces these with words based on English.

I like Indonesian a great deal; it has such an elegant structure that it’s tempting to believe it could only have been made artificially. But in fact it’s as natural as the next language, notwithstanding its exceptional capacity for absorbing foreign vocabulary—and contributing to linguistic mythology.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on September 17, 2004.

Image credit: Laura Pro [CC BY-SA 4.0], from Wikimedia Commons


National Word Nerd Day

Entry in a dictionary

Today, January 9, is an important national holiday—it’s my birthday, which I share with such varied figures as Dave Matthews, J.K. Simmons, Kate Middleton, and Richard Nixon. It’s also National Word Nerd Day, which I find quite appropriate, in that I certainly consider myself a word nerd. Back in elementary school, my classmates used to tease me by accusing me of reading the dictionary for fun, but I didn’t understand what was supposed to be problematic about that. Of course I read the dictionary for fun! I love learning new words, discovering the origins of words and how they evolve, and exploring the way language works. That might have had a little something to do with why I studied linguistics in grad school, and why I became an author (and later, a publisher). Yep, I love words, and if you do too, add a few new ones to your vocabulary today. I can also recommend a book you might enjoy: Word by Word: The Secret Life of Dictionaries by former Merriam-Webster lexicographer Kory Stamper.

Image credit: Pixabay


Sinkholes

A sinkhole in Duluth, Minnesota in 2011

Losing ground

As a California resident, I have experienced my fair share of earthquakes. They’re unsettling (literally and figuratively), and yet they’re something we have all come to accept as a normal occurrence in this part of the world. We like the climate, the views, the culture—in other words, the whole vibe of the area—and we simply accept that it has its…faults. We stockpile emergency supplies, buy earthquake insurance, perform seismic retrofitting on our buildings to reduce the risk of damage, and then go on with our lives. Each time a truly devastating earthquake hits—such as the great San Francisco quake in 1906 or the Loma Prieta quake in 1989—we learn some important new lessons, and soon thereafter feel much safer about the future.

Sometimes that safety is illusory. For example, on December 11, 1995, the ground moved in a very small area of San Francisco’s Seacliff neighborhood, swallowing Howard Billman’s $1.5 million house (which, back in those days, would have been considered a very expensive property—a mansion, in fact). In this case, however, the cause was not an earthquake, but a giant sinkhole measuring 200 feet (60m) across and 40 feet (12m) deep. Sometimes sinkholes are natural occurrences; in this case, however, the culprit was an old sewer tunnel that had deteriorated. The tunnel itself, of course, was much smaller than the resulting hole. That bit of surprising geometry is just one of the things I find interesting about sinkholes. The mechanisms by which they form, and the abruptness with which they appear—often without warning, and in seemingly unlikely areas—make them even more fascinating, in a rather grim way.

Feeling Depressed and Empty?

Generally speaking, a sinkhole is a depression in the ground formed when the top layer of earth collapses into an empty space below—a process also called subsidence. (If there were no empty spaces—only layers of earth, rock, sand, and water—the ground would remain solid.) The natural question is where those cavities came from in the first place, and the usual answer is that they were always there—unnoticed until the ceiling of stone above them gave way. So what causes a layer of stone to collapse?

Sinkholes are most often found in areas where the bedrock is limestone or another relatively porous stone. Water can dissolve limestone—especially if the water is slightly acidic (due to natural or artificial causes). If acidic groundwater seeps through small cracks in a layer of limestone and into an empty cavity beneath, it can carry away the dissolved rock; the cracks eventually become wide enough that the entire layer weakens and caves in. This type of process is responsible for many of the sinkholes found in Florida and Texas.

Throwing Out the Basement with the Bathwater

But sinkholes can occur for other reasons too. In some cases, the loss of groundwater (due to pumping for municipal water supplies, say) creates new cavities large enough and near enough to the surface that sinkholes result, even though the soil or rock above the cavity was not compromised. Even more interesting, though, is the way damaged sewer pipes or other tunnels, such as the one in Seacliff, create massive underground cavities. In a typical scenario, an old sewer pipe fills to capacity after a rainstorm and, due to a blockage or collapse, begins leaking excess water into the surrounding soil. When the water recedes, it drains back into the sewer pipe, carrying with it soil and minerals eroded from the adjacent areas. Repeat this process often enough, or with a large enough volume of water, and the empty spaces left where the soil has eroded away become large enough to turn into sinkholes.

Yet another cause of sinkholes is salt mining—though not always for the reason you might think. Abandoned salt mine shafts can and do collapse, but some sinkholes have a more indirect cause. In Cheshire, England, in the late 1800s, salt was produced in great quantities by drying brine that was pumped out of the ground. This brine was a saturated salt solution that had formed by the contact of groundwater with layers of rock salt over many years. Because the solution was saturated, it could not dissolve any more salt, and therefore left in place naturally occurring salt pillars that supported the earth above. However, salt producers pumped out the brine so quickly and recklessly that the salt concentration began to drop rapidly as fresh groundwater flowed in. The fresh water quickly dissolved the salt that formed the supporting pillars, and the ground collapsed, forming a great many sinkholes. But because the sinkholes were located far from the sites where the pumping took place (in some cases, several miles away), it was impossible for people who had lost their homes to the subsidence to pin the blame on any particular salt producer.

None of this is in any way reassuring. Regardless of the geological and topographical features of the ground you’re walking on, there’s always the chance that some unknown empty space—whether natural or artificial—lurks below, waiting to envelop you in a sinkhole on your very next step. It happens with tragic frequency, though usually not on a scale sufficient to merit widespread attention. But then, you can’t really prepare for a sinkhole the way you prepare for an earthquake, so maybe ignorance is bliss.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on December 2, 2004.

Image credit: KiwiDandy [Public Domain], via Flickr


Earth’s Rotation Day

The Foucault pendulum at the California Academy of Sciences in San Francisco

Yes, I’m aware: the Earth rotates all day, every day. But on this date in 1851, Léon Foucault proved it scientifically by setting his eponymous pendulum in motion in Paris. You can see replicas of this pendulum in science and natural history museums all over the world. They’re kind of boring to watch, as they very slowly change direction and periodically knock over little pegs. But that subtle change in direction occurs only because the Earth is rotating beneath the pendulum. This simple invention was and remains a brilliant demonstration of what to many of us is a self-evident truth.
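
The rate at which the pendulum’s swing plane appears to rotate depends on latitude: a full turn takes one sidereal day divided by the sine of the latitude. Here’s a quick back-of-the-envelope sketch in Python (the latitudes are approximate and purely illustrative) showing why Foucault’s pendulum in Paris needs more than 31 hours to come full circle, while one at the North Pole would take just under 24.

    import math

    SIDEREAL_DAY_HOURS = 23.934  # time for one full rotation of the Earth

    def precession_period_hours(latitude_deg: float) -> float:
        """Hours for a Foucault pendulum's swing plane to rotate a full 360 degrees."""
        return SIDEREAL_DAY_HOURS / math.sin(math.radians(latitude_deg))

    # Approximate latitudes, for illustration only
    for place, lat in [("North Pole", 90.0), ("Paris", 48.85), ("San Francisco", 37.77)]:
        period = precession_period_hours(lat)
        print(f"{place:13s} {period:5.1f} hours per turn (~{360 / period:.1f} degrees per hour)")

(By that math, the pendulum at the California Academy of Sciences in San Francisco turns even more slowly than Foucault’s original in Paris, since San Francisco sits at a lower latitude.)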

Image credit: BrokenSphere [CC BY-SA 3.0 or GFDL], from Wikimedia Commons


The Steadicam

A Steadicam operator

When it comes to recording video, elements like composition, lighting, audio recording, judicious use of camera angles, panning, zooming, and so on all require practice and a certain degree of artistic sensibility; these do not come in the box with your smartphone, DSLR, or camcorder. And although virtually every modern video recording device for consumers has a built-in mechanism to compensate for the jitter caused by small hand movements, long handheld shots—especially those taken while walking, as is natural for camcorder users—typically include a fair amount of bounce and sway, making them look amateurish regardless of any other merits the recording may have. You almost never see such bumpy images on TV or in movies, unless a shot was made that way intentionally to achieve a certain effect. Professional videographers and cinematographers supplement their skill with a great piece of technology called a steadicam to smooth out the trickiest of handheld shots.

Steady as You Go

You can think of the steadicam as an extremely sophisticated shock absorber for a camera. Just as the shock absorbers on your car keep the ride smooth even when the road is not, so a steadicam keeps a camera steady despite bumps underneath. But unlike automotive shock absorbers, a steadicam must also compensate for pan (horizontal rotation), tilt (rotation up or down), and roll (rotation about the axis of the lens)—though only when these changes are not deliberately made by the operator. After all, a handheld shot that could only ever point in one direction would not be terribly interesting.

To do all this, a steadicam starts with a large, rigid harness or vest worn by the operator. Because the entire apparatus, including the camera, is typically quite heavy—sometimes as much as 90 lb. (about 40 kg)—the vest spreads out the weight as much as possible. This not only minimizes fatigue, but also gives the camera a much more solid attachment point than the operator’s arms would provide. Protruding from this vest is a heavy-duty mechanical arm, somewhat reminiscent of the ones used in certain adjustable desk lamps. The arm has two rigid segments (called “bones”), connected by spring-loaded joints. The mechanism is designed in such a way that the bones always remain parallel to each other, but can move up, down, left, and right with the application of a small force from the operator. The camera itself (whether film or video) sits atop a postlike mounting assembly called a “sled,” which is attached to the end of the arm by a free-floating rotating joint known as a gimbal. The weight of the camera is counterbalanced partly by the tension of the springs and partly by other components mounted at the bottom of the sled, such as batteries and a video screen that lets the operator see what is being filmed while watching their step.
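
That counterbalancing is really just a center-of-gravity calculation. The little Python sketch below uses entirely hypothetical masses and distances (not the specifications of any real rig) to show the idea: with enough weight hung low on the sled, the combined center of gravity ends up just below the gimbal, which is what lets the camera “float” and respond to a fingertip’s worth of force.

    # Center-of-gravity sketch for a camera sled (hypothetical numbers only,
    # not the specifications of any actual rig).
    # Positions are in centimeters along the post, with the gimbal at 0;
    # positive is above the gimbal, negative is below it.

    components = {
        "camera":  (6.0, +25.0),   # mass in kg, position in cm: camera atop the post
        "post":    (2.0,   0.0),   # the post itself, roughly centered on the gimbal
        "battery": (3.5, -35.0),   # counterweights at the bottom of the sled...
        "monitor": (1.5, -30.0),   # ...including the operator's video screen
    }

    total_mass = sum(mass for mass, _ in components.values())
    cg = sum(mass * pos for mass, pos in components.values()) / total_mass

    print(f"Total sled mass: {total_mass:.1f} kg")
    print(f"Center of gravity: {cg:+.1f} cm relative to the gimbal")
    # Result: slightly below the gimbal (about -1.3 cm): bottom-heavy enough to
    # hang upright, but close enough to neutral that a light touch can pan or tilt it.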

All these joints, springs, and weights shift the center of gravity away from the camera itself while providing multiple points where vibration and other unwanted movement can be isolated from the camera. The net effect is that the operator can walk, climb stairs, step over obstacles, or even jog while keeping the “floating” camera perfectly smooth. This makes possible shots that could never be achieved with a tripod or dolly, enabling the camera operator to walk among the actors freely while keeping all equipment out of the shot.

Gonna Film Now

The steadicam was invented in 1973 by cinematographer Garrett Brown, and it made its mainstream debut in the 1976 film Rocky. Since then, it has become ubiquitous in both film and TV. The most impressive use of the steadicam I’ve seen is the 2002 film Russian Ark, shot in the Hermitage, a former palace in St. Petersburg that’s now a museum. The entire 90-minute film was done as a single, continuous steadicam shot that followed the main character as he walked from one room to the next. Garrett Brown himself also famously used a steadicam to shoot footage of California’s Redwood National Park at a very low frame rate as he walked slowly through it; played back at normal speed, the footage appears to hurtle through the trees, and it became the backdrop for the speeder bike sequence in Return of the Jedi (1983).

As sophisticated as steadicams are, they are useless without a highly trained operator. A great deal of practice, not to mention stamina, is necessary to become proficient filming scenes with a 90-pound weight hanging from your chest. Steadicam operators, who generally work as freelancers and provide their own equipment, are both highly paid and well respected. Professional steadicam equipment is, not surprisingly, quite expensive (a complete rig can cost as much as a mid-range car)—though scaled-down versions for use with smaller consumer devices can be had for well under US$1,000. That money won’t make your vacation videos less boring, but your camera’s “off” switch was designed for just such a purpose.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on November 11, 2004.

Image credit: Mike1024 [Public domain], via Wikimedia Commons


National Tempura Day

Tempura

Tempura refers to the Japanese technique of dipping food (usually vegetables or seafood) in a light batter and deep-frying it. There are lots of other ways to deep-fry stuff (with or without batter), but tempura has a distinctive texture and flavor. I don’t know if I’ve ever had tempura anywhere besides a Japanese restaurant, but it’s easy enough to make yourself if you’re so inclined.

Image credit: FASTILY [CC BY-SA 4.0], from Wikimedia Commons


Arcosanti

Arcosanti

Building a rural city

While on a business trip in Scottsdale, Arizona in the early 1990s, I took a walk down the road from the hotel one afternoon and ran into a peculiar-looking place called Cosanti. This compound, an official Arizona Historical Site, is a collection of oddly shaped concrete structures, including large domes and apses made from earthen molds. The first thing a visitor notices is the multitude of handmade bronze and ceramic windbells all over the property. These are made in the foundry and workshops on the site and are available for sale in the gift shop. But Cosanti is much more than a new-agey craft center. It’s the gallery and studio of the late Italian-American architect and artist Paolo Soleri (who also lived there until his death in 2013). As the brochures on the counter explained, Cosanti is, among other things, a prototype for a much larger and grander construction project called Arcosanti.

City in the Wilderness

Located about 70 miles (110 km) north of Phoenix, Arcosanti is called an “urban laboratory.” What Soleri was testing in this laboratory for nearly 50 years was a concept he called arcology, a blending of architecture and ecology. His vision was to build a 25-acre city where 5,000 people could one day live, work, and play—comfortably, sustainably, and in harmony with nature.

Soleri believed that wastefulness and urban sprawl are among the great evils of the age, and he wanted to eliminate these problems with careful design. According to arcology, well-planned urban areas can use space much more efficiently and benefit from dramatically reduced energy requirements and environmental impact. This means, for example, eliminating cars, roads, and garages by putting all buildings within walking distance of each other. It also means creating multi-use spaces for maximum flexibility, and relying on solar and wind energy for most heating, cooling, and lighting.

Beyond the issues of consumption and pollution that plague the world’s urban and suburban areas, Soleri felt that people have become too detached from each other, and that an effective community requires more human interaction. Accordingly, Arcosanti has been designed with a large amount of shared living space (such as kitchens, gardens, and recreation areas). This seemingly benign fact sets off warning bells for Soleri’s critics, some of whom see Arcosanti as an immense commune, or worse—a cult-like organization. While the project does attract its fair share of New Age types, it also attracts many ordinary people for whom privacy does not necessarily mean a single-family house in a cul-de-sac. But if anything, Arcosanti’s biggest problem is that it hasn’t produced enough converts—or, to use a less loaded term, enthusiasts.

A Time to Build

When construction on Arcosanti began in 1970, Soleri expected it to be completed in 10 years, but less than 5% of the planned project has been built to date. Construction is done by volunteers, who pay to live and work at Arcosanti during five-week workshops. Fewer than 100 people reside at Arcosanti at any given time, though the site receives more than 40,000 tourists per year. Much of the money used to fund the work comes from sales of the windbells and other pieces of art. But the money and volunteers are not plentiful enough to move the project along quickly. Before his death, Soleri placed Arcosanti under the control of the nonprofit Cosanti Foundation, which continues to pursue his vision.

Even if Soleri’s experiment in the Arizona desert proves one day to be fabulously successful, it will not necessarily signal a triumph of arcology over other forms of urban planning. What works for 5,000 people may not scale up to a city of millions; what works in a hot, dry climate may fail in colder, darker, and wetter areas. But the biggest roadblock of all is not technological but psychological: convincing suburbanites that the cozy, interdependent community of a rural “city” is an improvement over the self-sufficient existence they’ve worked toward their entire lives. After all, arcology assumes that everyone will more or less like, respect, and work happily with their neighbors. Sounds like a fantasy to me.

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on August 21, 2004.

Image credit: Carwil [CC BY-SA 4.0], from Wikimedia Commons


National Smith Day

Granny Smith apples

When I saw that today was National Smith Day, I assumed it was a day to honor metalworkers. But no, it’s a day to recognize people whose name (or part of a name) is Smith. Wikipedia tells me it’s the most common surname in the United Kingdom, Australia, Canada, New Zealand, and the United States. January 6 was chosen as National Smith Day because it’s the birthday (in 1580) of Captain John Smith; the person who came up with the holiday in 1994 was Adrienne Sioux Koopersmith. (I remember years ago meeting a fellow with the last name Goldsmith, and when I asked what he did for a living, he told me he was an actual goldsmith. I thought that was funny, though centuries ago perhaps that would have been the rule rather than the exception.) In any case, give the Smiths in your life a high five today. And if you don’t have any, do the next best thing and eat a Granny Smith apple!

Image credit: Max Pixel


The Right-to-Quiet Movement

Woman making the "shhhh" gesture

Shouting down excess noise

When I was in high school, I had an alarm clock that I truly hated. It was not merely loud, it was hideously, harshly loud. It sounded pretty much exactly like a smoke alarm, and had precisely the same effect: it scared me senseless every time it went off. I’d wake up, all right, but in such an anxious state that I came to associate the early morning with feelings of terror. Knowing a thing or two about electronics, I decided to perform surgery on the clock and modify it so that instead of making noise, it would flash a bright light in my face when the alarm went off. My modification worked—at least in the sense that the light flashed at the appointed time. What I hadn’t thought through was the fact that at the time the alarm went off, my eyes would be closed (and, more often than not, turned away and buried in a pillow), so while the light flashed merrily away, I kept on sleeping. My invention merely swapped the stress created by a noisy alarm clock for the stress created by being late for school.

Whether due to this adolescent trauma or for more mundane reasons of genetics or environment, I have had an aversion to noise almost as long as I can remember. My idea of a good time is visiting a library, cathedral, or desert location where the loudest sound is that of a page turning or wind blowing; my idea of torture is trying to write while someone is operating a leaf blower outside, having an otherwise quiet walk ruined by loud traffic, or trying to hold a conversation on a noisy train. If you were to plot my stress level on a graph alongside a graph of the ambient sound level, you’d probably find significant correlations. I used to think my preference for quiet was abnormal if not pathological, until one day I typed “quiet” into a search engine and came up with the Right to Quiet Society and the Noise Pollution Clearinghouse, two of numerous organizations dedicated to the promotion of quiet. There is in fact a rather large and diverse anti-noise pollution movement afoot, and being a fan of quiet, I find this notion extremely interesting.

Now Hear This

Broadly speaking, there are two main types of what is commonly called noise pollution: low-level, continuous background noise; and extremely loud but intermittent noise. Examples of background noise include radios or TVs left on all the time, appliances such as refrigerators and air conditioners, computers and other devices with cooling fans, and traffic sounds. Loud intermittent noises are things like planes flying overhead, leaf blowers, sirens, vacuum cleaners, and PA systems in clubs and concert venues. Typically the anti-noise groups focus on the second type of noise, citing extensive research on noise-related health concerns: hearing damage from extended exposure to high levels of sound; sleep loss; psychological trauma; and increased stress levels resulting in high blood pressure, aggressive behavior, and even suicide. But there is also a significant drive to reduce background noise, because even though it may not result in hearing loss, the cumulative long-term effect of low-volume but persistent unwanted sound can have a significant impact on one’s mental health and stress level.

It can be tricky business drawing the line between “sound” and “noise,” and even the most ardent anti-noise activists agree that context plays a significant role in determining what should be considered noise or, more specifically, noise pollution. Very loud sounds, however sonorous they may be, can cause hearing damage after a period of time, so it would be fair to call a Bach cantata “music” at 80 decibels but “noise” at 130. Likewise, I may enjoy listening to loud music at a concert, but the very same music at the same volume would be noise pollution if it’s occurring in the next room when I’m trying to sleep. On the other hand, there are loud noises that would not be called “pollution.” I want to be disturbed by noises like sirens, back-up alarms, or gunfire when they are necessary to alert me to danger. So the generally agreed-upon definition of “noise” is sound that is unwanted or distracting, and “noise pollution” is the term used for unnecessary, excessive environmental noise.
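
Those decibel figures are easy to misread, because the scale is logarithmic: every 10 dB increase represents ten times as much sound power. A two-line calculation in Python (reusing the 80 dB and 130 dB figures from the cantata example above) shows how far apart those two listening experiences really are.

    # Decibels are logarithmic: a difference of D dB corresponds to a
    # power ratio of 10 ** (D / 10).
    music_db = 80   # the Bach cantata as "music"
    noise_db = 130  # the same cantata as "noise"

    power_ratio = 10 ** ((noise_db - music_db) / 10)
    print(f"A {noise_db} dB sound carries {power_ratio:,.0f} times the power of an {music_db} dB sound.")
    # -> 100,000 times the power, which is why the same piece can be pleasant
    #    at one level and hearing-damaging at another.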

Crying For Silence

Anti-noise pollution groups have a wide variety of aims. Some concern themselves exclusively with aircraft noise in residential areas, for example; others seek more broadly to regulate any noise (factories, motorcycles, lawnmowers, watercraft, etc.) that threatens the peace and tranquility of the population. There are also movements to regulate workplace noise, to set and enforce safe standards for sound at concerts and clubs, and to reduce or eliminate background music at shopping malls, medical offices, and other public places. The overall message is that second-hand noise is a lot like second-hand smoke: it’s one thing if you want to damage your own health, but quite another to inflict noise on other people nearby who cannot escape it, and yet suffer because of it.

There are more examples of noise pollution than I can possibly list here; more appear every time I turn around. The problem is that most people have become so accustomed to constant noise that they simply don’t notice it anymore. You’ve probably seen signs asking you to turn off your phone in a museum or refrain from talking during movies—these requests must be made explicitly because otherwise it would simply never occur to many people that such sounds might be offensive. The biggest aim of the anti-noise pollution organizations is therefore simply to bring the dangers and annoyances of noise into the public awareness, at which point, they hope, a majority of people will be outraged enough to do something about it—either voluntarily or through legislation. I wish them, of course, the best of luck, though I can’t help noticing the irony of the squeaky-wheel effect: those who complain the loudest tend to get heard, and loudness is precisely the opposite of what anti-noise pollution activists stand for.

Epilogue: The Noisy American

I have traveled to many parts of the world, and based on what I’ve witnessed, I have developed a nearly foolproof metric for identifying Americans: the volume of their voices. English is not intrinsically louder than any other language, but Americans, as a group, tend to speak more loudly than any other nationality I’ve encountered in my travels—even if they’re speaking the local language. I’ve asked people in several other countries if this has been their experience as well, and so far, everyone has agreed with me. This is, of course, a gross overgeneralization, a completely unscientific and unfair one. But I have to wonder: could it be a simple matter of habitually compensating for what has become an incredibly high ambient noise level? How’s that? Oh, I said, “I HAVE TO WONDER…”

Note: This is an updated version of an article that originally appeared on Interesting Thing of the Day on July 7, 2004.

Image credit: Pixabay

