When we talk about the ‘quality’ of audio systems, we often have the loudspeakers in mind. Only after we have discussed the loudspeakers do we usually move on to power amplifiers, microphones, microphone pre-amplifiers and digital signal processing hardware… all components that involve moving electrons.
The invention of the Internet Protocol (IP) address changed the world. Devices on a network could now be labelled with a number so that they could address each other. This meant that, instead of broadcasting data to all other devices on a network, they could use an IP address to send data packets to a specific receiver.
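The difference between broadcasting to everyone and addressing a single receiver can be sketched with a minimal UDP sender in Python (the payload, address and port below are purely illustrative, not part of any real system):

```python
import socket

def send_unicast(payload: bytes, host: str, port: int) -> int:
    """Send one UDP datagram to a single, addressed receiver (unicast),
    rather than to every device on the segment (broadcast)."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(payload, (host, port))

# Illustrative call - the IP address and port are made up:
# send_unicast(b"data packet", "192.168.1.20", 50000)
```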
DIGITAL VS ANALOGUE: A LIFE CHANGING EXPERIENCE - BY ROBIN JOHNSON
Two weeks ago, Christoph Haertwig wrote about how the transition from analogue to digital audio has affected the professional recording industry. This time, I hope to complement those words with the view from the amateur audio world…
AUDIO NETWORK SECURITY: INTRODUCING DANTE DOMAIN MANAGER
Audio networking has brought many improvements to the world of professional audio: virtually limitless channel counts, minimal and easy cabling, simple redundancy - just to name a few. It has also brought us new functionality that didn’t exist before, such as the separation of functional and physical connections (the network as a patch panel), storing and recalling patches in memory, monitoring the status of individual inputs/outputs and sharing the same infrastructure for other functionality, such as user interfaces, lighting and video.
DIGITAL VS ANALOGUE: GREENER GRASS GROWS ON BOTH SIDES - BY CHRISTOPH HAERTWIG.
When I started as a recording engineer in the 1990s, analogue audio was already on the verge of being replaced by digital technology, and a Hamburg-based programmer named Charlie Steinberg was just about to break through with his (later patented) Virtual Studio Technology (VST). Putting everything that used to live in racks and machine rooms into the computer sounded like a great idea. We anticipated every single piece of development news, as we were fed up with cutting tape, adjusting the machines, living with breaking external devices and not being able to quickly recall settings.
Yamaha HISTORY: PERSONAL COMPUTERS
When the microprocessor chip was introduced to the world in the 1970s by Intel, Motorola, Zilog, MOS Technology and others, alongside the emerging ‘personal computer’ industry, the innovative musical instrument market was among the first adopters of this new technology. A few decades later, there’s hardly any electronic music equipment on sale that doesn’t include a microprocessor.
THE CUSTOM CONFIGURABLE UI FOR LIVE MIXING
In the previous micro tutorial, two changes in the workflow of live sound engineers were highlighted - the division of ‘basic’ and ‘sound’ processing, and trouble-free infrastructure. A third change has now become an important part of the live sound marketplace: the transition from manufacturer-designed user interfaces to custom-configurable ones.
THE TWO CHANGES IN LIVE MIXING WORKFLOW.
Compared with 10 years ago, many innovations have changed the way we use mixing systems. For example, increased DSP power and i/o infrastructure removed virtually all of the constraints we had got used to in the past, making the workflow processes and procedures that coped with those constraints redundant. Instead, new workflow concepts emerged to help system designers and operators manage the huge size, power and complexity available in contemporary mixing systems. This blog presents two of these workflow concepts.
CLASSIFICATION OF NETWORKED MIXING SYSTEMS: POWER AND I/O SCOPE.
Thanks to the innovations in digital mixing and gigabit networking - together constituting ‘networked mixing systems’ - audio productions have evolved to unprecedented sizes and quality. This raises the issue of selecting a networked mixing system with enough capacity and functionality to fit a planned audio production. Brochures of mixing system manufacturers often boast of high channel counts and powerful DSP, but not always in a consistent way. To facilitate meaningful assessments, this micro tutorial proposes a classification method to obtain a rough ‘system scope’ assessment based on two properties: processing power scope and i/o scope. These properties are simplified, modelled representations of a system’s capabilities, intended for a rudimentary classification only. When it comes to the details, the manufacturer’s specifications have to be studied, of course.
TECHNOLOGY INNOVATIONS IN LIVE SOUND.
‘Senior’ sound engineers - let’s say those older than 40 - remember when the multicore cable that connects the Front Of House (FOH) mixing console and the stage was a clunky ‘snake’, several centimetres thick, which comprised multiple pairs of conductors. The longer the multicore and the more channel pairs it had, the heavier it became, requiring several people to carry it around and roll it out. We learned to cope with the downsides of analogue cabling: longer cable lengths suppressed the signal’s high frequencies, and heavy per-pair shielding was required to prevent crosstalk.
LOUDSPEAKER POWER SPECIFICATIONS.
Loudspeaker drivers convert electrical power into acoustic power by means of an electro-magnetic process: a coil hovers in a magnetic field generated by a fixed magnet, and an alternating electric current through the coil makes it move up and down. For low-power loudspeaker drivers, ferrite magnets are used; for more powerful loudspeakers, scarcer - and therefore more expensive - neodymium magnets are used. In both cases, the moving coil is attached to a cone made of paper or plastic, which transmits pressure waves into the air. A wooden or plastic cabinet is built around the driver to optimise the electric to acoustic power conversion process and to shape the response in frequency range and dispersion.
Sound reinforcement systems use components that radiate air pressure waves to the audience - components we know as ‘loudspeakers’. Most applications use the so-called ‘point source’ type: loudspeakers comprising one or more drivers mounted in a cabinet, radiating - or dispersing - waves as much as possible to the front of the cabinet. In practice, however, the dispersion pattern is usually broad for low frequencies, narrowing towards high frequencies. This dispersion behaviour is not ideal, having three disadvantages:
ISM WIRELESS MICROPHONES
Most audio connections in live sound applications use cables to carry audio signals - analogue, digital or networked - from sound sources to stage boxes, from stage boxes to mixers and from mixers to power amplifiers. However, one category of sound source is often connected through radio waves: microphones. Although a cable is the most secure way to connect a microphone, with the highest audio quality, there’s a practical reason to use radio: freedom of movement of the performer. This also goes for worn musical instruments, such as guitars.
Modern audio network protocols are often based on gigabit Ethernet technology, supporting channel counts that we couldn’t dream of 20 years ago. Even a small-scale network is capable of transporting thousands of channels. Because of the full addressing that is inherent to the Ethernet protocol, the bonus is that the network acts as a routing matrix with millions of patch points. There are multiple protocols available, all supporting 24-bit audio streams, some even 32 bits, and high resolution sample rates - so audio quality is not an issue. Timing is assured as these protocols use the Precision Time Protocol (PTP) and Quality of Service (QoS) mechanisms, while software and hardware phase-locked loop (PLL) technology has evolved to reduce transport jitter to inaudible levels.
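As a feel for those channel counts, here is a back-of-the-envelope budget in Python. It counts raw sample payload only, ignoring Ethernet and protocol packet overhead, so real per-link figures are lower - and a network built from many gigabit links multiplies the total:

```python
def max_channels(link_bps: float, bit_depth: int, sample_rate: int) -> int:
    """Upper bound on uncompressed audio channels over one link,
    counting raw sample payload only (no packet/protocol overhead)."""
    bits_per_channel = bit_depth * sample_rate  # bits per second, per channel
    return int(link_bps // bits_per_channel)

# 24-bit / 48 kHz audio on a single gigabit link:
channels = max_channels(1e9, 24, 48_000)  # roughly 868 raw channels per link
```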
Back in the seventies, the Palo Alto Research Center in California, USA (www.parc.com) developed some nifty computer technology, such as the mouse, the laser printer and computer networks. The Internet evolved from the first networks, such as ALOHAnet and ARPANET. Robert Metcalfe, first working at PARC and later founding his own company, 3Com, developed a practical networking standard for use in offices, called Ethernet. More than 40 years later, the whole world is using this standard to build information systems, and all personal computers, smart phones and tablets - and also many professional audio products sold today - have some form of Ethernet port built in. The Ethernet protocol is standardized as 802.3 by the IEEE standards organization.
THE BENEFIT OF STANDARDS IN THE AUDIO INDUSTRY
Standards have been part of our civilisation ever since human life evolved on Earth. One of the most obvious examples is language: the fact that a group of human beings can exchange information relies completely on the concept that all individuals within the group share the same language. Countries worldwide have institutionalised their languages in their educational systems, maintaining standards of syntax and vocabulary and teaching them in schools. Often, the languages of neighbouring countries are also included in the school curriculum to ensure cross-country compatibility of communication.
THE SIX RULES OF AUDIO NETWORK TROUBLESHOOTING
Manufacturers of audio networking equipment often promise a ‘plug and play’, hassle-free user experience, with a ‘sky’s the limit’ channel capacity. If you keep things simple, in most cases it’s true. Of course, once you start to make things more complex - for example by combining multiple brands and product types in a large distributed system - things start to get more complicated. Still, when systems are designed with care and enough forethought, it’s entirely possible to make things work - but the paradigm changes from ‘plug and play’ to ‘think, plug and play’.
THE EVOLUTION OF THE DIGITAL MIXING CONSOLE
Charles Babbage’s mechanical ‘Analytical Engine’, designed in 1833, is regarded as one of the first computing devices. A century later, using electro-magnetic and vacuum tube technology, massive, iconic electronic computing machines appeared, easily filling a large hall. After the invention of the transistor, with innovative companies starting to combine transistors into large-scale integrated circuits in the late 1970s, the ‘micro computer’ took off. With Apple and IBM as innovators, these drastically changed the world we live in. In their slipstream, musical instrument and professional audio manufacturers followed, including microprocessors to produce ‘digital audio’ products and eventually rendering most analogue technology obsolete. The big market transition from analogue mixing consoles to digital ones happened at the turn of the century.
THE EVOLUTION OF THE (ANALOGUE) MIXING CONSOLE
Human beings started to make music together long before electronics changed our way of life forever. When multiple musicians send out their sounds as air pressure waves, the air adds all individual sounds to a mix to end up at the listener’s ear, with levels depending on each performer’s distance from the listener.
Many professional audio products today use Dante to send and receive audio. A frequently asked question is ‘Can we go wireless?’ For the answer we have to go back to the early years of Ethernet in the 1970s, when Robert Metcalfe of the Palo Alto Research Centre in California, USA, proposed a method of sending and receiving data packets between multiple devices connected to a shared conductor: a coax cable. The more devices that were connected to the coax cable, the higher the probability was that two devices would, by accident, start to send packets at the same time, distorting each other’s messages.
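That collision risk can be illustrated with a toy slotted-transmission model (a simplification for intuition, not the actual CSMA/CD algorithm Ethernet used): if each of n devices transmits in a given time slot with probability p, the chance that two or more transmit together grows quickly with n.

```python
def collision_probability(n_devices: int, p_transmit: float) -> float:
    """Probability that two or more devices transmit in the same slot
    on a shared medium (simple slotted model, independent senders)."""
    p_none = (1 - p_transmit) ** n_devices
    p_exactly_one = n_devices * p_transmit * (1 - p_transmit) ** (n_devices - 1)
    return 1.0 - p_none - p_exactly_one

# Two devices at 10% activity collide in about 1% of slots;
# twenty devices at the same activity collide in roughly 61% of slots.
```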
In many background music (BGM) and paging applications, multiple loudspeakers are used. Often mounted in the ceiling, these are low-powered units, because they are located close to the listeners walking or sitting below and because BGM is by nature low in volume. The listening area of a BGM system is divided into ‘zones’ - e.g. a corridor, canteen or waiting area. The purpose of the sound system is to create a uniform sound within each zone. ‘Uniform’ means that if anyone walks through the zone, the sound pressure level (SPL) fluctuation - or ‘spread’ - should be limited so that it’s not clearly noticed by the listener.
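To get a feel for how mounting geometry controls the spread, here is a simplified point-source sketch in Python (free field, one speaker’s contribution only; the height, spacing and level figures are hypothetical):

```python
import math

def spl_at(distance_m: float, spl_at_1m: float) -> float:
    """Point-source SPL at a distance, given the SPL at 1 m (free field)."""
    return spl_at_1m - 20 * math.log10(distance_m)

# Hypothetical zone: ceiling speakers 3 m above ear height, spaced 4 m apart.
HEIGHT, SPACING, SPL_1M = 3.0, 4.0, 85.0

under = spl_at(HEIGHT, SPL_1M)                         # directly below a speaker
mid = spl_at(math.hypot(HEIGHT, SPACING / 2), SPL_1M)  # midway between two speakers
spread_db = under - mid                                # about 1.6 dB for this layout
```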
DYNAMIC RANGE (PART TWO): THE ANALOGUE PART OF DIGITAL AUDIO SYSTEMS.
Theoretically, it would be possible to feed audio signals directly to the human brain. It’s just a matter of coding the audio into about 60,000 pulse sequences, averaging 600Hz, then using a bio-electrical interface to connect them directly to the auditory nerve that runs to the brain stem.
DYNAMIC RANGE PART ONE: ANALOGUE TERMINALS
Although, in many cases, sound comes to us in a heavily compressed form, dynamics are one of the most important factors in the life of a recording or live sound engineer. Whether it’s Metallica playing Enter Sandman or a string orchestra playing Arvo Pärt’s Summa, both can only be reproduced by systems with high dynamic ranges.
THE HUMAN HEARING SYSTEM PART 3: LISTENING
When it comes to assessing an audio system’s sound quality through listening, the human auditory system makes things extremely difficult. The sensation of hearing is affected not only by the audio system, but also by many other factors. These include the quality of the sound source, the acoustic environment, the listening position and angle, the individual’s hearing abilities, preferences, expectations, short term / long term memory and all the other human senses; sight, taste, smell and touch.
THE HUMAN HEARING SYSTEM PART 1: THE HARDWARE
One of the most amazing pieces of audio hardware can be found in pairs: everybody has two ears. The most visible part of the human hearing system is the ear shell, which passes acoustic sound pressure waves through the ear canal to the eardrum, which sits a little deeper inside the head. The eardrum then transfers the air pressure variations to a much smaller surface using a mechanical construction that boasts the smallest bones in the human body. The stapes is the final bone of this construction, passing the converted pressure to a liquid housed in a miniature rolled-up tube: the inner ear, or cochlea. The small surface where the stapes bone touches the cochlea’s liquid is called the ‘oval window’; it’s the internal audio input connector of our inner ears.
DAW TECHNOLOGY: HARD DISKS.
The first commercially-available hard disk was brought to the market by IBM in 1956. It comprised a bundle of 50 disks, each 24 inches wide; it weighed about a ton and stored almost four megabytes (MB) of data. In theory, a few seconds of a high quality audio signal could be recorded on this machine. However, computers in 1956 were not fast enough to do that. It would take until the early 1990s before the first computer-based Digital Audio Workstations (or DAWs) replaced the tape recorder, using a regular computer’s hard disk to store multichannel audio.
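That ‘few seconds’ claim is easy to sanity-check with some PCM arithmetic. Reading ‘high quality’ as 24-bit/96kHz stereo is an assumption on my part - 1956 predates any such standard - but it gives a feel for the numbers:

```python
def seconds_of_audio(capacity_bytes: float, sample_rate: int,
                     bit_depth: int, channels: int) -> float:
    """Seconds of uncompressed linear PCM that fit in a given capacity."""
    bytes_per_second = sample_rate * (bit_depth // 8) * channels
    return capacity_bytes / bytes_per_second

# ~4 MB measured against 24-bit / 96 kHz stereo PCM:
secs = seconds_of_audio(4e6, 96_000, 24, 2)  # about 6.9 seconds
```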
ACOUSTICS PART TWO: ACOUSTIC ENHANCEMENT.
Building a concert hall for one kind of performance has the advantage that the architect can optimise the interior shapes and surfaces to provide a perfect acoustic environment. One hundred fine examples are described by Leo Beranek in his book ‘Concert Halls and Opera Houses’. However, for most venues the economy of scale dictates that several kinds of performances should be possible: from a singer-songwriter with an acoustic guitar in the middle of the stage up to a full philharmonic orchestra.
ACOUSTICS PART ONE: ROOM ACOUSTICS.
Imagine a James Bond episode far into the future; the scene shows James using a ‘jetpack’, hovering 100 metres above an endless snowy, Antarctic landscape. Using a similar jetpack, the inevitable villain hovers just 10 metres away, ready for combat. It’s the future so, instead of the roaring rocket fuel-powered jetpack James used for his escape after killing Spectre agent No.6 in Thunderball back in 1965, these ones use artificial gravity. This means the imminent scuffle will be in complete silence. No doubt James will emerge as the winner, of course!
POWER AMPLIFIERS PART TWO - HANDLING POWER
A power amplifier’s job is to supply a voltage to a loudspeaker and then to deliver whatever current the loudspeaker needs to move its voice coil, producing air pressure waves with a sound pressure level (SPL) up to what is specified for the speaker and/or application. The output voltage multiplied by the resulting current constitutes the power that is delivered to the loudspeaker.
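For a purely resistive load, the current follows from Ohm’s law, so the delivered power reduces to V²/R. A minimal sketch (real loudspeaker impedance varies with frequency, so the result is only a nominal figure):

```python
def power_into_load(v_rms: float, impedance_ohms: float) -> float:
    """Average power into a purely resistive load: P = V * I = V^2 / R."""
    current = v_rms / impedance_ohms  # Ohm's law: I = V / R
    return v_rms * current            # P = V * I

# About 28.3 V RMS into a nominal 8-ohm speaker delivers roughly 100 W.
```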
POWER AMPLIFIERS PART ONE: AMPLIFIER CLASSES, AKA ‘THE BEAUTIFUL INVENTION OF THE TRANSISTOR’.
Electrical power is the product of voltage (V, measured in volts) and current (I, measured in amperes, often shortened to amps). In the electrical circuits of analogue mixers and processors, the supply voltage is most commonly set to around ±15V to achieve a good signal-to-noise ratio. To prevent components from heating up too much, currents are kept low - in the milli- and micro-amp range - so the power consumption is in the milliwatt range.
THE COST OF QUALITY.
Some products are more expensive than others, with investors linking a product’s cost to its perceived quality. It’s no different in the professional audio world. In this micro tutorial, costs are linked to three quality related properties: functionality, design and build quality.
MAKING SURE IT WORKS: REDUNDANCY
Sound systems are often used in critical applications - for example in large scale sound reinforcement systems where a total system failure would result in a large audience having to be sent home, or in voice alarm systems where system failure is simply not an option. For these reasons, a degree of redundancy can be designed into audio systems.
SPREADING THE WORD: THE 6DB DECAY PER DOUBLE DISTANCE CHALLENGE.
When a sound source moves closer, the sound gets louder and vice versa - everybody knows that. It’s one of the main challenges to cope with in the design of sound reinforcement systems, so engineers soon learn about the ‘6dB per double distance’ rule that says it all. The rule is based on a simple physical concept: the distribution of acoustic energy.
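In a free field, the acoustic energy of a point source spreads over a sphere whose surface grows with the square of the distance, so intensity falls with 1/d² and the level change is 20·log10(d2/d1) - exactly 6.02dB per doubling. A quick sketch:

```python
import math

def level_change_db(d1_m: float, d2_m: float) -> float:
    """Level change when moving from distance d1 to d2 from a
    point source in a free field: 20 * log10(d2 / d1)."""
    return 20 * math.log10(d2_m / d1_m)

# Every doubling of distance costs the same ~6 dB, wherever you start:
# level_change_db(1, 2) and level_change_db(10, 20) are both about 6.02
```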
Many of nature’s physical phenomena, such as earthquakes, light and sound, span a huge range of possible levels. Although an amazingly large portion of these ranges can be perceived by the human sensory systems, human perception has a strange property: it is often non-linear. It’s one of the reasons why, just as Richter did with earthquake strength, engineers at Bell Telephone Laboratories introduced a logarithmic representation of a physical phenomenon - in their case, electrical audio transmission power over telephone lines - naming the unit the ‘Bel’ in honour of Alexander Graham Bell. This ‘power quantity ratio’ representation sets a reference power level Po, and then relates the actual power under discussion to the reference as a decimal logarithmic ratio. The same concept can be applied to sound intensity, which is also a power quantity. The resulting parameter is a very useful representation of how humans perceive power and intensity.
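In practice we use the decibel, one tenth of a Bel. A minimal sketch of the power-ratio calculation described above:

```python
import math

def power_ratio_db(p: float, p0: float) -> float:
    """Power quantity ratio in decibels: 10 * log10(P / P0)."""
    return 10 * math.log10(p / p0)

# Each factor of 10 in power adds 10 dB; doubling the power adds ~3 dB:
# power_ratio_db(1000, 1) gives 30.0, power_ratio_db(2, 1) gives ~3.01
```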
THE GOLDEN AGE OF WIRELESS.
There were times when television sets were advertised with remote controls comprising a small box with push buttons and a long wire connecting it to the television set. Soon, infrared light was used to replace the cable and the remote control boxes became smaller and more lightweight - a technology which is still used for most remote controls.
DIGITAL AUDIO FREEDOM – WELCOME TO THE WORLD OF YGDAI (AND A HOMAGE TO ELECTONE)
In 1991, Yamaha launched its first mass-produced eight-channel, fully digital mixing console with motorised faders, the DMP7D, which was preceded by the analogue i/o model DMP7. On the back of the DMP7D, a pair of XLR connectors was labelled ‘AES/EBU’ and a pair of RCA connectors was labelled ‘CD/DAT’ - an excellent example of the implementation of the standard digital audio formats available in the early 1990s.
MECHANICAL MUSIC REPRODUCTION IN THE MODERN AGE: THE DISKLAVIER.
This week an odd topic: the mechanical reproduction of music - not using a loudspeaker, but via a mechanical movement directly to create acoustic sound waves. For example, by blowing a pipe or triggering a resonating body such as a drum head or a string. Normally, this requires a human to play an instrument: blow a pipe, hit a drum, pluck a string. But if you look up ‘street organ’ on Wikipedia you’ll find ‘barrel organs’ popping up in the 18th century, with operators called ‘organ grinders’. These were often accompanied by a monkey, producing acoustic music by means of mechanical reproduction. At the end of the 19th century, the ‘Pianola’ became a common reproduction method, peaking in popularity in the roaring Twenties. Shortly afterwards the radio and the electrical phonograph put an end to the phenomenon of mechanical music reproduction.
HOW DID WE LAND ON 16 AND 24 BITS IN DIGITAL AUDIO?
Digital audio reproduction started to take off in mass-produced products with the launch of the Compact Disc (CD) by Sony and Philips. In the late 1970s and early 80s, digital audio reproduction moved on from just a few bits - six for the hi-hat in the Roland TR-909 drum machine, eight for the samples in the Fairlight CMI, 13 for the Sony PCM-1 - to finally land at 16 bits as the standard for home reproduction, and 24 bits for live sound and recording. But why 16? Why 24?
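Part of the answer is arithmetic: each bit in a linear PCM word adds about 6dB of theoretical dynamic range, which is easy to verify:

```python
import math

def dynamic_range_db(bits: int) -> float:
    """Theoretical dynamic range of an n-bit linear PCM word:
    20 * log10(2^n), i.e. about 6.02 dB per bit."""
    return 20 * math.log10(2 ** bits)

# 16 bits gives roughly 96.3 dB (enough for home reproduction),
# 24 bits gives roughly 144.5 dB (headroom for recording and live sound).
```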
THE COST OF LIVING - DOES 96KHZ MAKE SENSE?
The ‘96K’ debate may have already started when CBS started retailing Billy Joel’s 52nd Street on Compact Disc (CD) in 1982. A few years earlier, Philips and Sony, in a rare collaborative mood, had decided that the CD standard would use 16-bit words to represent digital audio and 44,100 samples to represent one second of audio - a consensus trade-off between audio quality and storage capacity.
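The storage side of that trade-off is simple arithmetic, sketched below; raising the sample rate or widening the words scales the required capacity proportionally:

```python
def pcm_bytes(seconds: float, sample_rate: int, bit_depth: int,
              channels: int) -> int:
    """Storage required for uncompressed linear PCM."""
    return int(seconds * sample_rate * (bit_depth // 8) * channels)

# One second of CD audio: 44,100 samples x 2 bytes x 2 channels
one_second = pcm_bytes(1, 44_100, 16, 2)    # 176,400 bytes
# 74 minutes at CD quality is about 783 MB; at 96 kHz / 24-bit
# the same programme would need roughly 3.3 times as much.
album = pcm_bytes(74 * 60, 44_100, 16, 2)
```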
POWER TO THE SPEAKER: THE DAMPING FACTOR
When we speak of audio signals, we usually don’t refer to the actual acoustic waves propagating through the air. Instead, we often think of audio as a voltage - with 0.775 volts RMS as the ‘0dBu’ reference, and typical analogue ‘line’ circuits clipping around +24dBu, a peak voltage close to the commonly used balanced ±15V power supply rails. And when we discuss digital audio, we think of a digital code representing a voltage - with 0dBfs (‘full scale’) representing the maximum peak level.
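By the widely used convention, 0dBu corresponds to 0.775 volts RMS (the voltage that dissipates 1mW in a 600Ω load). A minimal sketch of the conversion between dBu and volts:

```python
import math

V_REF_DBU = 0.775  # volts RMS: 0 dBu = 1 mW into 600 ohms

def dbu_to_vrms(dbu: float) -> float:
    """Convert a dBu level to volts RMS."""
    return V_REF_DBU * 10 ** (dbu / 20)

def vrms_to_dbu(v_rms: float) -> float:
    """Convert volts RMS to a dBu level."""
    return 20 * math.log10(v_rms / V_REF_DBU)

# +24 dBu is about 12.3 V RMS, i.e. roughly 17.4 V peak for a sine wave -
# close to the ±15 V supply rails of typical analogue line circuits.
```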
WHICH AUDIO NETWORK SOUNDS BEST
Skype, Facetime, YouTube… I’m not going to talk about them, though they use networks to carry audio. I’m talking about audio networks for professional audio - you know, CobraNet, EtherSound, Dante, Ravenna, AVB and so on. I can think of at least ten different types of professional audio network that I have used during the last decade. All of them claim to carry uncompressed digital audio around a studio/concert hall/festival site/other entertainment venue. They all have slightly different features and advantages. But which one has the best sound?
Yamaha HISTORY: THE CS SERIES ANALOGUE SYNTHESISERS
This week an odd topic: analogue synthesis. In the 1970s and early 80s, Yamaha was a leading manufacturer of analogue synthesisers. Where the MiniMoog is the archetype of the monophonic (one voice) keyboard synthesiser, the Yamaha CS80 is the undisputed ‘mother of polysynths’, regarded as the most impressive achievement in audio engineering in the seventies.
TIME IS PRECIOUS – WHERE DID THE EXTERNAL CLOCK GO?
It was a big debate in the previous decade: using external word clocks to influence a digital audio system’s time accuracy. With the introduction of the ‘Precision Time Protocol’ in gigabit networks, the discussion slowly died out and the idea of choosing a system clock to influence audio quality has basically disappeared. Many networked I/O racks don’t even have a word clock BNC connector anymore. What happened?
SPEAKER MEETS AMPLIFIER - HOW TO SELECT THE RIGHT AMPLIFIER
The invention of the vacuum tube more than 100 years ago made many things possible - radios, televisions, even computers. And, of course, audio power amplifiers to drive the speakers in them… and also in professional audio systems. After the transistor started to replace the vacuum tube in the 1970s, power amplifiers grew from a few watts for driving small speakers up to several kilowatts for driving high-powered line arrays. Since the 1970s, the market for sound reinforcement systems has matured, offering thousands of different loudspeaker cabinets and separately-sold power amplifiers to match any application. But this has introduced a challenge: which power amplifier to select for which speaker?
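One common rule-of-thumb for matching amplifier to speaker starts from the speaker’s sensitivity rating. A simplified free-field sketch (the figures and helper name are illustrative; real designs add headroom and account for power compression and room acoustics):

```python
import math

def required_power_w(target_spl_db: float, sensitivity_db: float,
                     distance_m: float) -> float:
    """Amplifier power needed to reach a target SPL at a distance,
    given the speaker's sensitivity (dB SPL for 1 W at 1 m).
    Free-field point source; no headroom or power compression."""
    db_above_1w = target_spl_db - sensitivity_db + 20 * math.log10(distance_m)
    return 10 ** (db_above_1w / 10)

# Hypothetical example: 100 dB SPL at 10 m from a 97 dB-sensitivity
# cabinet needs 3 + 20 = 23 dB above 1 W, i.e. about 200 W.
```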
AUDIO NETWORK BASICS (PART ONE): FIVE DISCUSSION TOPICS
A lot can happen in ten years. If you had been experimenting with the application of network technology in live audio systems back in 2007, you would have been a true pioneer - marketing people would call you an ‘early adopter’. Starting with the 100Mb Ethernet technology protocols CobraNet and EtherSound, later joined by the proprietary protocols Optocore and RockNet, the live audio world quickly learned to make use of the exciting possibilities and functionality of network technology. Ten years later, the market adopted gigabit Ethernet networks as a standard - nowadays there’s hardly a professional audio mixer, stage rack or DSP processor that doesn’t have an RJ45 connector to exchange audio with the world. Sound engineers learned to use network cables, program switches and design ad-hoc network structures to make their lives easier. This micro-tutorial presents the five most important topics in discussing audio networks.
WHICH DSP CHIP SOUNDS THE BEST ?
Today's professional audio market uses chips made by a handful of digital signal processing (DSP) manufacturers. The most-used chips are made, in alphabetical order, by Analog Devices, Intel, Motorola, Texas Instruments and Yamaha. Over the past three decades, DSP chips have developed from low-capacity devices to the advanced 32-bit and higher word-length systems used in today's processors and mixers, with manufacturers constantly improving performance. This performance is generally indicated by three properties: DSP power, audio quality and sound quality.