Friday, April 4, 2008

The Twist in the Tale

 

 

Electronic Circuits That Bend and Stretch

By Willie D. Jones

PHOTOS: JOHN A. ROGERS/UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

27 March 2008—Earlier this month, IEEE Spectrum reported on the development of bendable, twistable electronic circuits whose performance nearly matches that of conventional CMOS chips. The new circuits, developed by a team of researchers at the University of Illinois at Urbana-Champaign led by Professor John A. Rogers, are built from ribbons of silicon only a few nanometers thick that are mounted on flexible plastic substrates.

Today, in a report published online by the journal Science, the same group says it has developed an improved plastic circuit that is not only flexible but also stretchable and foldable. To make it foldable, the researchers looked at the behavior of everyday objects and observed that it's much easier to fold a magazine than a telephone book, says Rogers. So they decided to make the circuit much thinner.

The original recipe for flexible CMOS circuits comprised a 2- to 3-micrometer circuit layer sitting atop a plastic substrate as much as 100 µm thick. It could curve around a small roll of coins. But the new version has a total thickness of only 1.7 µm, including the plastic, which gives it the ability to wrap around a rod whose diameter is roughly 85 µm.

Rogers's group makes plastic circuits by transferring thin ribbons of silicon onto glue-coated plastic using a patterned rubber stamp. But before the ultrathin silicon layer is applied to the substrate, the plastic is heated, causing it to expand. Once the circuit layer is deposited and chemically bonded to the expanded substrate, the plastic is allowed to cool and contract. Relaxing the strain causes the circuit layer to buckle and form wavy patterns like the bellows of an accordion. It's the folds and wrinkles that give the circuit the ability to stretch and bend without breaking. Rogers says that in laboratory tests, the circuits, after a few hundred stretch-and-release cycles, showed no signs of fatigue.

 

PHOTOS: JOHN A. ROGERS/UNIVERSITY OF ILLINOIS AT URBANA-CHAMPAIGN

SHRINK, WRAP: This circuit sheet [LEFT] has not aged rapidly. The wrinkles make it possible to stretch the sheet over a sphere or an irregularly shaped object without breaking the sheet or damaging its current-conducting structures. To wrap circuits around a rod as thin as a strand of hair [RIGHT], the researchers made them superthin.

The Illinois researchers are now forming partnerships with physicians who are developing biomedical devices that incorporate the circuits. For example, a clinical neurosurgeon at the University of Pennsylvania, in Philadelphia, is working with the Illinois team to create an implantable sensor that will monitor electrical activity in the brain to help predict the onset of epileptic seizures. The device may also work in reverse, sending electric pulses that head off the seizures. "This requires a device that will conform to the rippled geometry of the brain, because the deep creases in the lobes are where a lot of the action happens," says Rogers.

To Probe Further

Here's a movie showing the plastic CMOS circuit wrinkling up after being printed on plastic.

 

Wednesday, March 19, 2008

Prof. Razavi writes about devices talking @ 60 GHz ...


 
Gadgets Gab at 60 GHz
By Behzad Razavi
Image: Harry Campbell

People not only talk on the airwaves, they increasingly expect their gadgets to do the same. The trend began in the late 1990s with Bluetooth, which provided 1 megabit of data per second. Then Wi-Fi and the IEEE 802.11 standard pushed the rate to 100 Mb/s. Now ultrawideband systems are going five times as fast as that.

In principle, such radio links, operating over short ranges, could replace the cables that now clutter our homes and offices, eliminate the speed penalty of going wireless, and even allow portable devices to off-load computing work to a nearby base station. The devices could thus shed hardware to become smaller, lighter, and cheaper.

But it won't happen until engineers lay their hands on more bandwidth. The various 2.4- and 5.8-gigahertz systems now in common use are rapidly running out of spectrum, and the inflexible 100-microwatt constraint on ultrawideband power will likely limit it to about 1 gigabit per second. Where do we go next?

Clearly, we must look upward, but just how far up isn't so obvious. One tempting thought is to use really high frequencies—infrared light. Although that tactic works fine if all you want to do is switch TV channels with your remote or operate a wireless mouse, it turns out that it's hard to modulate the output of infrared light-emitting diodes fast enough for more demanding applications. So for the moment anyway, RF makes more sense, and the best prospects to be found there reside in 7 GHz of unlicensed spectrum near 60 GHz. Those frequencies are 10 times as high as anything in common use today, and with the bandwidth they provide they can carry a lot more data. Until now, engineers designing products for the consumer market have shied away from 60 GHz because of various technical difficulties, but bandwidth hunger is finally awakening their interest.

Developments in chip design have also played a part. Today's 60-GHz technology depends on relatively expensive and power-hungry gallium-arsenide semiconductors, but various researchers—including engineers at IBM, the University of California, Berkeley, and those in my group at the University of California, Los Angeles—have shown that silicon chips can do the job with much less power and at a fraction of the cost. The silicon option is what makes 60-GHz communications attractive.

The allure is so strong that a special task force is now working on an extension of the IEEE 802.15.3 standard for wireless personal area networks in the 57- to 64-GHz band, and in 2006 a number of companies—including Matsushita, NEC, and Sony—came together to define a specification for transmitting high-definition video in this slice of the radio spectrum. Their group, called WirelessHD, of Sunnyvale, Calif., wants to link TV sets to disc players, video cameras, game consoles, laptops, and other devices at rates as high as 5 Gb/s—fast enough to transmit an HD feature movie in about a minute [see sidebar, "May the Power Be With You"].

There are other advantages besides extra bandwidth and faster data rates. Because the wavelengths are so short, the antenna needn't be much bigger than the head of a pin, small enough to go on the transceiver chip. Indeed, it is feasible to integrate many antennas and transceivers into a single chip so that together they can, with proper phasing, form a beam to steer transmissions in a particular direction [see illustration, "Adaptive Antennas"]. Such phased-array antennas can also be used to boost reception. These operations can be conducted automatically so that the sender and the receiver can find each other without human intervention, constituting an adaptive-array (or "smart") antenna system.
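The phasing idea can be illustrated numerically: applying a progressive phase shift across the elements makes their fields add coherently in one chosen direction and partially cancel elsewhere. Here is a minimal sketch for idealized isotropic elements; the function name and parameter values are illustrative, not from the article:

```python
import math

def array_factor(n_elements, spacing_wl, steer_deg, look_deg):
    """Magnitude of the summed far field of n equally spaced isotropic
    elements (spacing in wavelengths), phased to steer the beam toward
    steer_deg, evaluated in the direction look_deg (0 = broadside)."""
    k = 2 * math.pi  # wavenumber, per wavelength of spacing
    # phase step between neighbors: geometric delay minus the applied steering phase
    delta = k * spacing_wl * (math.sin(math.radians(look_deg)) -
                              math.sin(math.radians(steer_deg)))
    re = sum(math.cos(m * delta) for m in range(n_elements))
    im = sum(math.sin(m * delta) for m in range(n_elements))
    return math.hypot(re, im)

# Eight half-wavelength-spaced elements steered 20 degrees off broadside:
on_target = array_factor(8, 0.5, 20, 20)   # all 8 phasors add coherently
off_target = array_factor(8, 0.5, 20, 50)  # same array, wrong direction
print(on_target, off_target)
```

In the steered direction the eight unit phasors line up (magnitude 8); well off axis they largely cancel, which is the focusing the article describes.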

The integration of the antenna avoids the need for wires to carry signals to and from the chip, reducing the cost of packaging by one to two orders of magnitude. Further, the absence of exposed inputs and outputs makes the transceiver less vulnerable to electrostatic discharge during fabrication and assembly. Manufacturers could thus dispense with antistatic devices, which add capacitance and degrade performance.

These benefits do not come for free, however. Communication at 60 GHz involves significant challenges at the system, circuit, and device levels—challenges that account for why this bandwidth has lain fallow for so long. Designers can, however, get around these obstacles by taking advantage of the capabilities available at one level to relax the requirements imposed at another.

The difficulties begin with the propagation of the 60-GHz wave itself. As with any electromagnetic signal, the number of watts passing through each square meter diminishes in proportion to the square of the distance from the transmitter. On top of that, the size of the antenna scales with the wavelength, so its effective area—and so the power it can capture—varies in direct proportion to the square of the wavelength (and therefore in inverse proportion to the square of the frequency). Hence a signal broadcast at 60 GHz will convey to the typical receiving antenna just 1 percent as much power as it would have done had it been broadcast at 6 GHz. Making matters worse, 60-GHz rays are blocked by solid objects.
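The 1 percent figure follows directly from the Friis free-space relation, in which received power scales with the square of the wavelength for fixed antenna gains. A quick sanity check in Python (the function name and the 10-mW, 10-m values are illustrative assumptions):

```python
import math

def friis_received_power(p_tx_w, gain_tx, gain_rx, freq_hz, dist_m):
    """Friis free-space link budget: received power falls with the square
    of distance and, for fixed antenna gains, the square of frequency."""
    c = 3.0e8                      # speed of light, m/s
    wavelength = c / freq_hz
    return p_tx_w * gain_tx * gain_rx * (wavelength / (4 * math.pi * dist_m)) ** 2

p_6ghz = friis_received_power(0.01, 1, 1, 6e9, 10)
p_60ghz = friis_received_power(0.01, 1, 1, 60e9, 10)
print(p_60ghz / p_6ghz)  # tenfold frequency increase -> 1/100 the received power
```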

Photo: Behzad Razavi/UCLA

DESIGN TRICKS: allow transceivers to accommodate bulky components—for instance, by nesting square and octagonal induction coils one inside the other.

Some possible solutions are to transmit at very high power and to use adaptive-array antennas to send signals to their target by indirect routes, through reflection and refraction. It's better, though, to rely on lots of transceivers. If enough were strewn through an office—and even worn by the people who work there—any two devices would always be able to talk to each other directly or through a third node.

For this strategy to work, a transceiver must be made cheap enough, small enough, and frugal enough to run a long time on a small battery. These requirements are complicated by the 10-fold speedup in operating frequency. (Transistors have gotten faster, but not that much faster.) A major design overhaul will thus be needed. What's more, the connections between transistors have resistance, capacitance, and inductance, which tend to sap performance at these frequencies. And the high-speed, high-resolution, analog-to-digital and digital-to-analog converters needed to process gigabit-per-second data rates are real energy hogs.

The seriousness of these issues depends on just which integrated-circuit technology is used. Bipolar transistors made of silicon-germanium offer high speeds, and this technology makes it possible to fabricate high-quality passive devices, such as inductors, right on the chip, simplifying design and boosting performance. IBM has reported making 60-GHz silicon-germanium transceivers that can shoot data at 1 Gb/s for as far as 8 meters. But with multiple transceivers, analog-to-digital and digital-to-analog converters, and ever more complex signal processing, the cost of silicon-germanium becomes prohibitive.

CMOS chips are much less expensive, but the lower speed of the transistors and poorer quality of the passive devices make designing the circuits exceedingly tough. Nonetheless, the history of the semiconductor industry is littered with examples of products that were first built with bipolar technologies but were soon replaced with CMOS counterparts, suggesting that 60-GHz CMOS chips will in the end win out.

Designers clearly face high technical hurdles in fashioning CMOS circuits that can handle various RF-signal manipulations at 60 GHz. Many of these operations depend on heterodyning, in which the circuitry mixes two signals at different frequencies to produce an output that contains components at both the sum and the difference frequencies. A standard AM transmitter would, for example, multiply the relatively low-frequency audio signal to be broadcast (say, a 1-kilohertz tone) with the output of an oscillator running at a much higher radio frequency (say, 1000 kHz). The sum and difference of these two frequencies (999 kHz and 1001 kHz) fall just slightly above and below that of the RF oscillator. That's why the original audio signal is said to be "up-converted" to RF. The receiver for such an AM broadcast would typically use a similar oscillator to "down-convert" the RF signal back to audio, using exactly the same heterodyning principle.
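The sum-and-difference behavior follows from the product-of-cosines identity, and a short numerical sketch can confirm it. This fragment (all names hypothetical) multiplies a 1-kHz tone by a 1,000-kHz carrier and inspects the spectrum at the expected bins:

```python
import math

fs, n = 8_000_000, 8000          # 1 ms of samples -> 1 kHz bin spacing
audio_hz, carrier_hz = 1_000, 1_000_000

# Mixing = multiplying the audio tone by the carrier:
mixed = [math.cos(2 * math.pi * audio_hz * t / fs) *
         math.cos(2 * math.pi * carrier_hz * t / fs)
         for t in range(n)]

def dft_mag(signal, freq_hz):
    """Magnitude of one DFT bin, normalized so a full-scale tone reads 1.0."""
    re = sum(s * math.cos(2 * math.pi * freq_hz * t / fs) for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq_hz * t / fs) for t, s in enumerate(signal))
    return 2 * math.hypot(re, im) / len(signal)

lower = dft_mag(mixed, 999_000)      # difference frequency
upper = dft_mag(mixed, 1_001_000)    # sum frequency
carrier = dft_mag(mixed, 1_000_000)  # the carrier itself
print(lower, upper, carrier)
```

The energy lands entirely in the two sidebands at 999 and 1001 kHz (half amplitude each), with nothing at the carrier frequency, just as the identity cos(a)cos(b) = ½[cos(a−b) + cos(a+b)] predicts.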

For high-speed data communications at 60 GHz, such operations can be a nightmare to implement. For one thing, the oscillator would have to produce two 60-GHz outputs that are exactly 90 degrees out of phase. This is because the final modulated signal is produced by combining a sine and a cosine. The generation and routing of these two phases, while maintaining a 90-degree difference between them, is hard at 60 GHz.

Also, controlling the precise frequency of a 60-GHz oscillator is tricky because it's running too fast to measure directly, as crystal-controlled frequency standards are limited to about 100 megahertz. The 60-GHz signal must first be fed into a frequency-divider circuit that reduces the frequency by a large factor (say, 600). Only then can the output be compared with a frequency standard, which indicates whether the rate of oscillation is faster or slower than desired, so that it can be corrected accordingly. The tactic is simple enough, but the limits on transistor speed make it difficult to fashion such frequency-divider circuits that work at 60 GHz.

Fortunately, with a little cunning, you can make a receiver work using a 40-GHz oscillator instead. The first step is to mix the output of the 40-GHz oscillator with the 60-GHz received signal. That operation down-converts the signal to the difference frequency: 20 GHz. To down-convert the signal the rest of the way, the receiver circuitry need not incorporate a separate 20-GHz oscillator; it can simply use the output of a divide-by-two frequency divider attached to its 40-GHz oscillator. Because it operates on 40 GHz rather than 60 GHz, such a frequency divider is comparatively easy to implement. What's more, it is less vexing to route signals around a chip at 40 GHz than at 60 GHz. And happily enough for designers, the transmission path can follow the receiver operations in the reverse order to avoid 60-GHz oscillators and frequency dividers. That is, the data stream to be transmitted is first up-converted to 20 GHz and only then raised to 60 GHz.
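The arithmetic of this frequency plan fits in a few lines. The sketch below is a hypothetical illustration of the plan, not circuit code; the oscillator must sit at two-thirds of the RF frequency so that the LO and its divide-by-two output together span the whole conversion:

```python
def sliding_if_plan(rf_ghz):
    """Two-step down-conversion needing only one oscillator:
    choose lo so that lo + lo/2 == rf, i.e. lo = 2*rf/3."""
    lo = 2 * rf_ghz / 3          # 40 GHz for a 60-GHz signal
    if1 = rf_ghz - lo            # first mix: 60 - 40 = 20 GHz
    if2 = if1 - lo / 2           # second mix uses the divide-by-two output
    return lo, if1, if2

print(sliding_if_plan(60))  # -> (40.0, 20.0, 0.0): the signal reaches baseband
```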

Using 40 GHz for the oscillator frequency is just one way to dodge some of the thorny problems posed by 60 GHz. The IBM transceiver takes a slightly different tack: it incorporates a 17-GHz oscillator followed by a frequency tripler to obtain 51 GHz, which is roughly 8.5 GHz below the target frequency. The 51 GHz thus can serve to down-convert the received signal to 8.5 GHz. And a divide-by-two frequency divider attached to the 17-GHz oscillator generates the 8.5 GHz needed for the second stage of down-conversion.
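The IBM numbers come from the same kind of constraint: the tripled oscillator (3f) does the first conversion and the divide-by-two output (f/2) does the second, so 3f + f/2 must equal 60 GHz. Solving gives the "roughly 17 GHz" oscillator and "roughly 8.5 GHz" intermediate frequency quoted above (function name illustrative):

```python
def tripler_plan(rf_ghz):
    """Solve 3f + f/2 = rf for the base oscillator frequency f:
    the tripled LO handles the first mix, f/2 handles the second."""
    f = 2 * rf_ghz / 7
    return f, 3 * f, rf_ghz - 3 * f

f0, lo1, if1 = tripler_plan(60)
print(round(f0, 2), round(lo1, 2), round(if1, 2))  # -> 17.14 51.43 8.57
```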

IMAGE: Bryan Christie Design

ADAPTIVE ANTENNAS: can form a beam and direct it to a target, greatly reducing the attenuation of power, a serious problem in the 60-GHz band, which is strongly absorbed by air. In a conventional antenna [left], the received power declines with the square of the distance. Adaptive antennas [right] use an array of emitters with delayed phases that make the waves' peaks and troughs add constructively [inset]. This trick focuses the power into a beam, which can then be steered up (a) or down (b) electronically. Advances in 60-GHz transceiver design now make it possible to fit an adaptive array on a single chip.

Although such designs avoid the need for a 60-GHz oscillator, they still require low-noise amplifiers and down-converters. These circuits typically use passive devices like inductors or transmission lines on the chip to overcome the speed limitations of the transistors. Alas, such passive components have large footprints, which normally forces them to be placed awkwardly far apart, their long interconnections creating lots of parasitic resistance, capacitance, and inductance. To alleviate this problem, designers can nest the inductor loops used to build these passive components so that the connections between them can be kept short [see illustration, "Design Tricks"].

A bigger concern is how to fabricate transmitter circuits that can deliver a lot of oomph to the antenna. For communication across a range of 10 meters at data rates of several gigabits per second, some tens of milliwatts are necessary. Performed by a power amplifier, the task requires large transistors, which are typically slow. The good news is that the upcoming generation of CMOS chips, which boast 45-nanometer gate lengths, may be up to the job of producing this much power at 60 GHz.

But that's not the whole story, because not all the power that goes into an on-chip antenna gets broadcast. The silicon substrate—just 10 micrometers below—absorbs (and hence wastes) some energy, so such antennas radiate only a fourth to a half of the power supplied to them.

Perhaps more research will lead to more energy-efficient antennas. Meanwhile, engineers can resort to off-chip antennas that operate at these tiny wavelengths. Another, cheaper, solution is to incorporate enough transmitters and on-chip antennas to compensate for the power lost to the silicon substrate. The future will tell whether we need to resort to this rather inefficient solution.

It will take time for designers to master all this new technology, because the models that we use to simulate circuits can't easily handle 60 GHz. Today's transistor models are constructed as though all their capacitance and resistance came from small capacitors and resistors connected here and there. In reality, of course, the capacitance and resistance in these transistors are distributed over appreciable dimensions. So lumping things in this way fails to capture some important effects that manifest themselves most obviously at these high frequencies. Also, the electric and magnetic interactions between the passive devices and the silicon substrate are difficult to calculate from basic physical principles. For these reasons, modeling must rely on both the theoretical understanding of the behavior of the devices and on a large number of experimental measurements (which in turn can help refine the models).

The industry's vision is that we can solve these problems and that all the electronic devices in our homes and offices will be chattering furiously and wirelessly in another five to 10 years. Cables will go the way of the buggy whip. Now if the engineers at MIT can finally perfect their idea of using magnetically coupled resonators to charge batteries through the air, we'll eliminate those pesky power cords, too. Only then will we enter the real wireless age.

To Probe Further

The IBM 60-GHz silicon-germanium transceiver is described in the Digest of Technical Papers for the 2006 IEEE International Solid-State Circuits Conference (ISSCC).

Two CMOS transceivers for the 60-GHz band are described in the Digest of Technical Papers for the 2007 ISSCC.

Detailed information concerning the IEEE 802.15.3 standard for the 60-GHz band is available at http://www.ieee802.org/15/pub/TG3c.html.




Sidebar 1

MAY THE POWER BE WITH YOU

Photos: LUCASFILM/20TH CENTURY FOX/THE KOBAL COLLECTION

A useful measuring stick for wireless transmission may be found in Star Wars: Episode IV (1977), the first in the Star Wars series. The six frames shown above are taken at equal time intervals from the movie's beginning to its end and are marked to show the portion that could be downloaded in the space of 2 minutes. Bluetooth (IEEE 802.15.1), at 2 megabits per second, would get a few seconds of the movie; the IEEE 802.11g standard would make it to the middle of the introductory narrative; and ultrawideband would get about a tenth of the way into the movie. Meanwhile, IEEE 802.15.3—the proposed standard for 60 GHz—would make it all the way to the final credit [bottom right].


 

Friday, September 7, 2007

The Future of Music


          For fun, read the multimedia version of the article at - http://spectrum.ieee.org/print/5480

Some of you may find ideas for "Small Dreams!"

 

The Future of Music

By: Suhas Sreedhar

You're listening to your favorite Pink Floyd CD on your home stereo when you accidentally hit the "change CD" button on the control panel. All goes quiet for a bit as your CD player urgently shifts to play whatever is in the next tray. With dread, you desperately reach for the volume knob, but it's too late—your speakers blast the latest Green Day album. Reacting like you were just pricked by a pin, your hand jolts to the volume knob and turns it down. You breathe a sigh of relief. But that's not the end of it. Ten minutes later you feel that something isn't right. Even though you love this album, you can't listen to it anymore. You shut it off, tired, puzzled, and confused. This always seems to happen when you switch from a classic album to a modern one. What you've just experienced is something called overcompression of the dynamic range. Welcome to the loudness war.

The loudness war, what many audiophiles refer to as an assault on music (and ears), has been an open secret of the recording industry for nearly two decades and has garnered more attention in recent years as CDs have pushed the limits of loudness thanks to advances in digital technology. The "war" refers to the competition among record companies to make louder and louder albums. But the loudness war could be doing more than simply pumping up the volume and angering aficionados—it could be responsible for halting technological advances in sound quality for years to come.

Overcompression

The smoking gun of the loudness war is the difference between the waveforms of songs 20 years ago and now. Here is an example:

A waveform from the late 80s / early 90s

 

A waveform from now

The second waveform not only has a higher amplitude than the first but is also highly compressed—there is very little difference between its highest points and the average level. In other words, the new song has a drastically reduced dynamic range—the difference between the loudest parts (the peaks) and quietest parts of the sound.

Music, like speech, is dynamic. There are quiet and loud moments that serve to accentuate each other and convey meaning by their relative levels of loudness. For instance, if someone is talking and suddenly shouts, the loudness of the shout, in addition to the content, conveys a message—be it a sense of urgency, surprise, or anger.

When the dynamic range of a song is heavily reduced for the sake of achieving loudness, the sound becomes analogous to someone constantly shouting everything he or she says. Not only is all impact lost, but the constant level of the sound is fatiguing to the ear. So why is achieving greater and greater loudness so important that the natural ebb and flow of music has been so readily sacrificed?

The answer goes back to the beginnings of recorded music.

The Vinyl Era

Loudness has always been a desirable quality for mainstream popular music. The louder a song is overall, the more it stands out from ambient noise and the more it grabs your attention. Studies in the field of psychoacoustics, which investigates how humans perceive sound, show that people judge how loud a sound is based on its average loudness, not its peak loudness. So even though there might be two songs whose loudest parts reach the same loudness level in decibels (dB), the one with the higher average level is generally perceived as louder.
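The peak-versus-average distinction is easy to quantify. The sketch below (all names illustrative) compares a pure tone with a hard-clipped copy of itself: both peak at essentially the same level, but the clipped version has a much higher average (RMS) level, so it is perceived as louder:

```python
import math

def peak_and_rms_db(samples):
    """Peak and RMS level of a signal, in dB relative to full scale (1.0)."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak), 20 * math.log10(rms)

n = 48000
sine = [math.sin(2 * math.pi * 440 * t / 48000) for t in range(n)]
# A driven, hard-clipped copy: same peak level, higher average level.
squashed = [max(-1.0, min(1.0, 2.5 * s)) for s in sine]

peak_s, rms_s = peak_and_rms_db(sine)
peak_q, rms_q = peak_and_rms_db(squashed)
print(peak_s, rms_s)  # peak ~0 dB, RMS ~ -3 dB
print(peak_q, rms_q)  # same peak, RMS a couple of dB closer to 0
```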

As far back as the early 1960s, record companies began engaging in a loudness battle when they observed that louder songs in jukeboxes tended to garner more attention than quieter ones. To maintain their competitive edge, record companies wanted to keep raising the loudness of their songs. But the physical properties of vinyl records limited engineers' ability to perpetually increase loudness.

A vinyl record consists of a lacquer into which small V-shaped grooves—vibrational transcriptions of analog sound—are cut. Creating a record in the studio involves a process called mastering, where songs are sonically adjusted and placed into an appropriate order to fit the given medium's requirements. Mastering for vinyl was always a balance between loudness and playing time. The louder you wanted a song to be, the wider the groove needed to be in order to accommodate the larger amplitude of the transcription. Since there's only a limited amount of usable surface area per vinyl disc, gaining loudness meant sacrificing playing time, especially on a long playing (LP) record where upwards of six songs were often fit on each side of the disc.

In order to save the cost of manufacturing an excessive number of vinyl discs per album, playing time usually won out over loudness. Live music typically has a dynamic range of 120 dB, peaking at about the same loudness of a jet engine (though some concerts have gone even louder). Vinyl records tend to have about 70 dB of dynamic range. This meant that in order to fit a song onto a record, it either needed to have its overall amplitude reduced or it needed to be compressed—have its peaks brought down to a lower level—to fit within the given range. How much of each was done varied from record to record and defined the art of mastering. However, the tools of analog signal processing limited the amount of compression that was possible.

Analog compressors of this era were basically voltage control amplifiers that varied the level of an output signal based on a control voltage, similar to the devices that regulate signals in AM radio. Such compressors were typically used on individual instrument tracks (vocals, guitar, etc.) to add clarity to a sound or to change the sound of an instrument for effect. However, in some cases, such as with the hit singles put out by Motown Records (for example, "Want Ads" by Honey Cone), compressors were used to boost the loudness of songs to higher-than-average levels. Mastering engineers accomplished this by reducing the dynamic range of a song so that the entire song could be amplified to a greater extent before it pushed the physical limits of the medium. This became known as "hot" mastering and was typically done on singles where each side of the record only contained one song. Generally, however, the average level of songs and albums stayed relatively the same throughout the period.

"CD" Behavior

"The invention of digital audio and the compact disc became a new fuel for a previously existing loudness race," says Bob Katz, a renowned mastering engineer and one of the first outspoken critics of dynamic-range overcompression. "The reason is that the analog media did not permit what we would call 'normalization' to the peak level."

When the compact disc (CD) was introduced in the early 1980s, there was much for audiophiles to be happy about. Digital audio removed many of the physical restrictions vinyl had imposed, such as concerns about surface noise (caused by dust, scratches, the lacquer itself, and so on) and limited dynamic range. The CD was capable of supporting a dynamic range of about 96 dB. For most of the 1980s, when CDs were still high-end products and mastering engineers largely did not have access to digital signal-processing technologies, albums released on CD tended to make use of this better dynamic range.

Unlike vinyl, which had varying loudness limits due to its physical characteristics, the CD had a definitive peak loudness limit due to its specified digitizing standard, a form of pulse code modulation (PCM). PCM had previously been used in telephony as a method of digitizing an analog signal. When an analog signal is sampled for digitization, each level of the signal is quantized (stored as a number in binary). How frequently samples of the signal are taken is specified by the sampling rate, and the total number of unique quantization levels capable of being stored is determined by the number of bits. When Sony and Philips specified the standard for CD audio, they determined that the sampling rate would be 44.1 kHz with 16 bits per sample. Using the rule-of-thumb approximation of 6.02 dB of dynamic range per bit, CD audio gets roughly 96 dB of dynamic range. The highest loudness level (16 bits of all 1's) was designated as 0 decibels full scale (dBFS). Lower levels were assigned negative numbers.
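The arithmetic above can be sketched directly; the function names are illustrative:

```python
import math

def dynamic_range_db(bits):
    """Rule-of-thumb quantization dynamic range: about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

def dbfs(sample, bits=16):
    """Level of an integer sample code relative to full scale, in dBFS."""
    full_scale = 2 ** (bits - 1)   # 16-bit audio spans -32768..32767
    return 20 * math.log10(abs(sample) / full_scale)

print(round(dynamic_range_db(16), 1))  # -> 96.3 dB for CD audio
print(round(dbfs(16384), 1))           # half of full scale -> -6.0 dBFS
```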

In the 1980s, CDs were mastered so that songs generally peaked at about -6 dBFS with their root mean square (RMS)—or average levels—hovering around -20 dBFS to -18 dBFS. As multidisc CD changers began to gain prominence in households toward the end of the decade, the same jukebox-type loudness competition started all over again as record companies wanted their CDs to stick out more than their competitors'. By the end of the 1980s, songs on CDs were amplified to the point where their peaks started pushing the loudness limit of 0 dBFS. At this point, the only way to raise the average levels of songs without having their loudest parts clipped—the digital equivalent of distortion, where information is lost because it exceeds the bit capacity—was to compress the peaks.

While analog compressors had been limited in the extent to which they could reduce peak levels, digital compressors were much more powerful. As mastering engineers began to get hold of digital signal-processing tools, they were able to "hot" master songs even more. The process was similar to what had been done on some vinyl singles—peak levels were brought down by a certain amount, and then the entire waveform was amplified until the (now reduced) peaks once again reached 0 dBFS. The result? The average level of the entire song increased.
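That squash-then-amplify recipe is easy to model. The following sketch is a toy limiter, not a real mastering tool: it clamps the peaks to a threshold, then applies makeup gain so the reduced peaks sit at full scale again, raising the song's average level:

```python
import math

def rms_db(samples):
    """Average (RMS) level in dB relative to full scale (1.0)."""
    return 20 * math.log10(math.sqrt(sum(s * s for s in samples) / len(samples)))

def hot_master(samples, threshold):
    """'Hot' mastering in miniature: clamp peaks to `threshold`,
    then amplify so the reduced peaks reach full scale (1.0) again."""
    limited = [max(-threshold, min(threshold, s)) for s in samples]
    gain = 1.0 / max(abs(s) for s in limited)
    return [s * gain for s in limited]

n = 48000
# A tone with quiet and loud passages (a slow envelope over a 220-Hz carrier):
song = [0.9 * math.sin(2 * math.pi * 220 * t / 48000) *
        (0.4 + 0.6 * abs(math.sin(2 * math.pi * 2 * t / 48000)))
        for t in range(n)]

loud = hot_master(song, threshold=0.5)
print(rms_db(song), rms_db(loud))  # average level rises; peaks stay at full scale
```

The quiet passages survive only in miniature: everything is pushed toward the ceiling, which is exactly the loss of dynamics the article describes.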

The 1990s saw average amplitude levels go from around -15 dBFS to as much as -6 dBFS in extreme cases. Most songs in this decade, however, remained at around -12 dBFS. The 2000s saw the loudness war reach its height, with most current songs having an average level of -9 dBFS or higher. From the mid 1980s to now, the average loudness of CDs increased by a factor of 10, and the peaks of songs are now one-tenth of what they used to be. The loudness war is also not just confined to the big four record companies (Warner Music Group, EMI, Sony BMG, and Universal Music Group). Overcompression is now widespread and performed by independent labels and international record companies.

The CD Is Dead; Loudness Continues

The biggest change from 15 years ago to today is how people consume music. With more than 100 million iPods sold worldwide as of early this year, more and more people are listening to music on the go rather than at their home stereos. Physical media like CDs are on their way out. And yet overcompression continues to plague the music world.

Even though the CD might be in its death throes, most digital music available online was mastered for CDs. Popular formats like MP3, AAC, and Free Lossless Audio Codec (FLAC) merely use data-compression techniques (not to be confused with dynamic-range compression) to reduce the amount of data a song encoded in PCM takes up. As long as the specter of CDs continues to haunt the online world, downloaded songs will still be subject to overcompression.

But the problem doesn't just lie on the production end. If people are listening to songs in a noisy environment—such as in their cars, on trains, in airport waiting rooms, at work, or in a dormitory—the music needs to be louder to compensate. Dynamic-range compression does just that and more. Not only does it raise the average loudness of the song, but by doing so it eliminates all the quiet moments of a song as well. So listeners are now able to hear the entire song above the noise without getting frustrated by any inaudible low parts.

This might be one of the biggest reasons why most people are completely unaware of the loss of dynamics in modern music. They are listening to songs in less-than-ideal environments on a constant basis. But many listeners have subconsciously felt the effects of overcompressed songs in the form of auditory fatigue, where it actually becomes tiring to continue listening to the music.

"You want music that breathes. If the music has stopped breathing, and it's a continuous wall of sound, that will be fatiguing," says Katz. "If you listen to it loudly as well, it will potentially damage your ears before the older music did because the older music had room to breathe."

Some audiophiles find relief by going back to the past. A few musicians still continue to release their albums on vinyl records (in addition to CDs and online formats). Because vinyl cannot support the loudness that CDs can, these modern vinyl releases are much quieter than their CD counterparts. But they are often less compressed as well, and, in some instances, remastered in a way that is as dynamic as albums released in the 1960s and 1970s.

One of the most prominent examples of this is the recent Red Hot Chili Peppers album Stadium Arcadium, which was remastered for vinyl by mastering engineer Steve Hoffman with the intent of providing full dynamic sound. Hoffman is one of the few mastering engineers who have actually refused to take certain jobs because he's been asked to overcompress music. "[It happens] all the time," says Hoffman. "At least once a week."

But turning to vinyl for uncompressed music might not always provide salvation. In order to save the cost of remastering, record companies might simply take the compressed master of a song, reduce the overall loudness, and place it on vinyl. Katz warns, "You could take the Red Hot Chili Peppers recording and put it onto vinyl just as it came from CD, and it would sound just as fatiguing. [The only difference is] you'd just have to turn the volume control up because you couldn't get the peak level the same."

Tearing Down the Wall

Audiophiles looking to the future for relief from overcompression see a cloudy picture. DVD-Audio and Super Audio Compact Disc (SACD) are two high-fidelity formats that were thought to be solutions to the loudness war. Both formats offer not only a greater dynamic range than the CD but also higher sampling rates, which allow frequencies above the limit of most human hearing to be encoded onto the medium, addressing a common complaint from listeners who prefer analog to digital because they claim to hear those frequencies.

DVD-Audio uses PCM encoding that can support 24-bit, 192-kHz stereo sound (compared with the CD's 16-bit, 44.1 kHz), yielding 144 dB of dynamic range, 14 dB over the human threshold of pain. SACD, like the CD, was developed by Sony and Philips; it uses a form of pulse-density modulation (PDM) encoding branded as Direct Stream Digital. Instead of taking 16-bit samples at a frequency of 44.1 kHz, it takes 1-bit samples at 64 times that rate (2.82 MHz). It has a dynamic range of about 120 dB. Additionally, both SACD and DVD-Audio are capable of high-fidelity multichannel surround sound.
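The dynamic-range figures quoted above follow directly from bit depth: PCM dynamic range is the ratio of full scale to one quantization step, about 6.02 dB per bit. A quick back-of-the-envelope check, including DSD's bit rate relative to the CD's:

```python
import math

def pcm_dynamic_range_db(bits: int) -> float:
    """Dynamic range of N-bit PCM: ratio of full scale to one
    quantization step, i.e. 20*log10(2**N), about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

cd_dr   = pcm_dynamic_range_db(16)  # ~96 dB for the CD
dvda_dr = pcm_dynamic_range_db(24)  # ~144 dB for DVD-Audio

# DSD trades bit depth for rate: a 1-bit stream at 64 x 44.1 kHz
# carries 4x the CD's per-channel bit rate.
cd_rate  = 16 * 44_100       # 705,600 bits/s per channel
dsd_rate = 1 * 64 * 44_100   # 2,822,400 bits/s per channel
```

The 96 dB and 144 dB results match the article's figures for CD and DVD-Audio; DSD's quoted ~120 dB comes from noise shaping rather than from this simple formula.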

Since their introduction in 2000, however, neither format has taken hold. An overwhelming majority of releases have been classical music, a genre that has largely escaped overcompression to begin with. So even if audiophiles wanted to spend upward of $300 for a DVD-Audio or SACD player, chances are they wouldn't be able to buy their favorite popular albums in either medium.

Now that music has gone online, high-fidelity digital files remain a possibility; formats such as FLAC can carry 24-bit audio. Slim Devices, a company acquired last year by Logitech, has created two products—the Squeezebox and the Transporter—that wirelessly stream digital files from a computer or the Internet to high-end stereo receivers. Both can handle 24-bit audio, but the problem, says Sean Adams, former CEO of Slim Devices, is lack of content.

"If we're going to go to higher levels of sound quality, the real problem is actually getting the content out. Right now, unfortunately, the industry has kind of gone backward from CD quality. When MP3 came out, [it was called] CD quality when it really wasn't," says Adams. "We've made some improvements since then with better [compression techniques], but it's really a function of people demanding better sound quality. That has to happen first before the [recording] industry's going to start producing it."

Overcompression, however, seems to be one of the biggest obstacles to overcome. With music being compressed to smaller and smaller dynamic ranges, the need for the next high-fidelity audio format vanishes. If record companies aren't making use of the full dynamic capability of CDs, then why bother moving to another format with even more potentially unused capability? And with the average consumer either completely unaware of, or only subconsciously irritated by, the current state of overcompressed music, there is little incentive for sound quality to progress. Consequently, all the potential benefits of higher-quality audio—lifelike dynamic range, greater frequency response, and multichannel surround sound—go unrealized, even though the technology exists today. Audiophiles are forced back to vinyl and analog recordings that should have been obsolete 20 years ago.

But there might still be hope for getting out of the loudness war. RMS (average) normalization algorithms, such as Replay Gain, have been implemented in many digital audio players; they bring every song in a digital library to the same average level. With Replay Gain enabled, songs from many different CDs are played back at a consistent average loudness, so listeners no longer have to adjust the volume each time they move from one album to another. Such normalization cannot undo the compression of the music (it amplifies or attenuates the song in its entirety), but it counteracts whatever effort went into making one song louder than another, essentially nullifying the loudness war altogether.
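The core idea behind this kind of normalization can be sketched in a few lines. The sketch below is a deliberate simplification: real Replay Gain applies an equal-loudness filter and analyzes loudness statistics over the whole track rather than computing a single plain RMS value, and the -14 dB target here is an arbitrary illustrative choice.

```python
import math

def rms(samples):
    """Root-mean-square level of normalized samples (-1.0..1.0)."""
    return math.sqrt(sum(x * x for x in samples) / len(samples))

def normalization_gain_db(samples, target_db=-14.0):
    """Gain (in dB) that brings a track's RMS level to target_db.

    Simplified stand-in for Replay Gain's analysis, which uses an
    equal-loudness filter and loudness statistics, not plain RMS.
    """
    level_db = 20 * math.log10(max(rms(samples), 1e-9))
    return target_db - level_db

def apply_gain(samples, gain_db):
    g = 10 ** (gain_db / 20.0)
    return [x * g for x in samples]

# A "loud" track is turned down and a "quiet" track is turned up,
# so both play back at the same average level.
loud  = [0.5, -0.5] * 100
quiet = [0.05, -0.05] * 100
loud_out  = apply_gain(loud,  normalization_gain_db(loud))
quiet_out = apply_gain(quiet, normalization_gain_db(quiet))
```

Note that the gain is applied uniformly to the whole song, which is why, as the article says, normalization cannot restore dynamics that compression removed; it only equalizes average levels between tracks.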

Many hope that widespread adoption of technologies like Replay Gain will show record companies that ever-greater compression in the name of competitive loudness is a futile exercise, and that slowly but surely popular music will return to a dynamic, less-compressed state. In fact, many digital audio players have already caught on: Winamp supports Replay Gain, and iTunes has its own normalization option called Sound Check, which also works on iPods.

Whether the loudness war can end and give rise to the next generation of high-fidelity audio depends heavily on the attitudes of consumers. Unlike with the CD and the DVD, there is no overwhelming industry push toward the next level of sound quality. How songs and albums will sound depends entirely on whether listeners actually care about the intricacies of the music.