
Audio effects are used pervasively in music performance and production, and the creation of new digital audio tools is a significant industry. They are also commonly used in game production, broadcasting, and film and television.

This book is intended as an educational textbook on audio effects, with relevance to audio signal processing, music informatics, sound engineering, and related topics.

It provides fundamental background information on digital signal processing, focusing on audio-specific aspects. These are the building blocks on which audio effects are developed. It integrates theory and practice, relating technical implementation to musical implications.

  • This book can be used to gain an understanding of the operation of existing audio effects or to create new ones.
  • It includes detailed coverage of common audio effects (and plenty of unusual ones).
  • It includes discussion of current digital audio standards, like VST. Accompanying source code is provided in C/C++, for audio plug-in development using JUCE.
  • Each section of the book also has examples, exercises, sound samples, lecture slides, and additional information on related topics.
This second edition includes revised and expanded chapters, with new content on equalization, distortion, phase vocoders, and reverberation. It also has new chapters on immersive audio and on advanced concepts in audio programming, revised source code using the latest version of JUCE, and corrections and improvements throughout.

Preface

Audio effects are used in broadcasting, television, film, games, and music production. Once they were used primarily to enhance a recording and to correct artifacts in the production process, but now they are used creatively and pervasively.

This book aims to describe the theory behind the effects, to explain how they can be implemented, and to illustrate many ways in which they can be used. The concepts covered in this book have relevance to sound engineering, digital signal processing, acoustics, audio signal processing, music informatics, and related topics.

Both authors have taught courses on this subject. We were aware of excellent texts on the use of audio effects, especially for mixing and music production. We also knew of excellent reference material on audio signal processing and audio effect research. But it was still challenging to find material that teaches the reader, from the ground up, how and why to create audio effects and how they are used.

That is the purpose of this book. It provides students and researchers with knowledge of how to use the tools and the basics of how they work, as well as how to create them. It is primarily educational and geared toward undergraduate and master’s level students, though it can also serve as a reference for practitioners and researchers. It explains how sounds can be processed and modified by mathematical or computer algorithms. It teaches the theory and principles behind the full range of audio effects and provides the reader with an understanding of how to analyze, implement, and use them.

We chose not to shy away from presenting the math and science behind the implementations and applications. Thus, it is one of the few resources for use in the classroom with a mathematical and technical approach to audio effects. It provides a detailed overview of audio effects and includes example questions to aid in learning and understanding. It has a special focus on programming and implementation with industry standards and provides source code for generating plug‑in versions of many of the effects.

Chapter 1 begins by covering some fundamental concepts used often in later chapters. It also introduces the notation that we use throughout. Here, we describe some essential concepts from digital signal processing, thus allowing the subject matter to be mostly self‑contained, without the reader needing to consult other texts.

Chapter 2 is about how to build audio effects as software plug‑ins. We focus on the C++-based VST format, which is probably the most popular standard and is available for most platforms and hosts. This chapter (and to some extent, Chapters 13 and 14) may be read at any point or independently of the others. It makes reference to effects discussed elsewhere in the book, but the chapter is focused on practical implementation. It complements the supplementary material, which includes source code that may be used to build VST plug‑ins for a large number of the effects described in this book.

In Chapter 3, we introduce delay lines and related effects such as delay, vibrato, chorus, and flanging. These are some of the most basic effects, and the concept of delay lines is useful for understanding implementations of the effects introduced in later sections.
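As a sketch of the core idea, a delay line is commonly implemented as a circular buffer; the class and method names below are illustrative, not taken from the book's accompanying code:

```cpp
#include <vector>
#include <cstddef>

// Minimal delay line using a circular buffer. The write position
// advances once per sample; the read position trails it by the
// requested delay, wrapping around the end of the buffer.
class DelayLine {
public:
    explicit DelayLine(std::size_t maxDelay)
        : buffer(maxDelay + 1, 0.0f), writePos(0) {}

    // Write one input sample and read the sample `delay` samples ago.
    float process(float in, std::size_t delay) {
        buffer[writePos] = in;
        std::size_t readPos =
            (writePos + buffer.size() - delay) % buffer.size();
        writePos = (writePos + 1) % buffer.size();
        return buffer[readPos];
    }

private:
    std::vector<float> buffer;
    std::size_t writePos;
};
```

Mixing the delayed signal back with the input in different proportions, and modulating the delay time, is what turns this one structure into echo, vibrato, chorus, or flanging.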

Chapter 4 then covers filter fundamentals. We chose a quite general approach here and introduce techniques that allow the reader to construct a wide variety of high‑order filters. Attention is also paid to some additional filters often used in other effects, such as the all‑pass filter and the exponential moving average.
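The exponential moving average mentioned above is the simplest recursive filter of all: a one-pole lowpass defined by y[n] = (1 − a)·x[n] + a·y[n−1]. A minimal sketch, with names of our own choosing:

```cpp
// One-pole exponential-moving-average lowpass filter:
//   y[n] = (1 - a) * x[n] + a * y[n-1],  with 0 <= a < 1.
// The closer `a` is to 1, the heavier the smoothing.
struct EmaFilter {
    float a;          // smoothing coefficient
    float y = 0.0f;   // previous output (filter state)

    float process(float x) {
        y = (1.0f - a) * x + a * y;
        return y;
    }
};
```

Besides audio filtering, this structure is widely used inside other effects, for example to smooth envelope or gain signals.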

In Chapter 5, we explore filters in more detail, covering effects that have filters as their essential components. These include the graphic and parametric equalizer, wah‑wah, and phaser.
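One band of a parametric equalizer is typically realized as a peaking (bell) biquad. The sketch below uses the widely cited "Audio EQ Cookbook" (R. Bristow-Johnson) coefficient formulas; the book's own derivation may differ in notation:

```cpp
#include <cmath>

// Peaking equalizer biquad (direct form I), coefficients from the
// Audio EQ Cookbook. fs = sample rate, f0 = center frequency,
// Q = bandwidth control, gainDb = boost (+) or cut (-) in dB.
struct PeakingEq {
    float b0, b1, b2, a1, a2;                   // normalized coefficients
    float x1 = 0.0f, x2 = 0.0f, y1 = 0.0f, y2 = 0.0f;

    PeakingEq(float fs, float f0, float Q, float gainDb) {
        float A     = std::pow(10.0f, gainDb / 40.0f);
        float w0    = 2.0f * 3.14159265358979f * f0 / fs;
        float alpha = std::sin(w0) / (2.0f * Q);
        float a0    = 1.0f + alpha / A;
        b0 = (1.0f + alpha * A) / a0;
        b1 = -2.0f * std::cos(w0) / a0;  // same -2*cos(w0)/a0 term
        b2 = (1.0f - alpha * A) / a0;
        a1 = b1;                         // as the feedback a1
        a2 = (1.0f - alpha / A) / a0;
    }

    float process(float x) {
        float y = b0 * x + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2;
        x2 = x1; x1 = x; y2 = y1; y1 = y;
        return y;
    }
};
```

Sweeping f0 with an envelope or LFO while keeping a resonant Q is essentially what wah-wah does; at 0 dB gain the filter passes the signal unchanged.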

We then move on to nonlinear effects. Chapter 6 discusses modulation, focusing primarily on tremolo and ring modulation. Chapter 7 goes into detail on dynamics processing, especially the dynamic range compressor and the noise gate. Here, much emphasis is placed on the correct implementation and the perceptual qualities of these effects. Chapter 8 then covers distortion effects. These are concerned with the sounds that result from highly nonlinear processing, beyond the dynamics processors of the previous chapter.
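Tremolo and ring modulation are both multiplications by an oscillator; they differ in whether the modulator is offset to stay positive. A minimal sketch, with function names of our own choosing:

```cpp
#include <cmath>

// Tremolo: the amplitude wobbles between (1 - depth) and 1,
// because the LFO is offset into the range [0, 1] before scaling.
float tremolo(float in, float lfoPhase, float depth) {
    float mod = 1.0f - depth * (0.5f + 0.5f * std::sin(lfoPhase));
    return in * mod;
}

// Ring modulation: plain multiplication by the carrier, which
// produces sum and difference frequencies rather than a wobble.
float ringMod(float in, float carrierPhase) {
    return in * std::sin(carrierPhase);
}
```

The phase arguments would be advanced per sample by 2π·f/fs for the chosen LFO or carrier frequency f.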

Having introduced the important signal‑processing concepts, we can now move on to the phase vocoder and introduce several effects that do their processing in the frequency domain. This is the focus of Chapter 9.
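The phase vocoder rests on the short-time Fourier transform: window a frame of audio, transform it to complex spectral bins, modify those bins, and resynthesize. A practical implementation uses an FFT; the naive O(N²) DFT of one Hann-windowed frame below is only meant to make the analysis step concrete, and all names are illustrative:

```cpp
#include <vector>
#include <complex>
#include <cmath>
#include <cstddef>

// Analyze one frame: apply a Hann window, then compute the DFT.
// Each bin k holds the complex amplitude (magnitude and phase)
// near frequency k * fs / N.
std::vector<std::complex<float>> analyzeFrame(const std::vector<float>& frame) {
    const std::size_t N = frame.size();
    const float pi = 3.14159265358979f;
    std::vector<std::complex<float>> bins(N);
    for (std::size_t k = 0; k < N; ++k) {
        std::complex<float> sum(0.0f, 0.0f);
        for (std::size_t n = 0; n < N; ++n) {
            float hann  = 0.5f - 0.5f * std::cos(2.0f * pi * n / N);
            float angle = -2.0f * pi * k * n / N;
            sum += hann * frame[n]
                 * std::complex<float>(std::cos(angle), std::sin(angle));
        }
        bins[k] = sum;
    }
    return bins;
}
```

Frequency-domain effects such as pitch shifting and time stretching then amount to manipulating the magnitudes and phases of these bins before the inverse transform.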

Up to this point, none of the effects attempted to recreate how a natural sound might be perceived by a human listener in a real acoustic space. The next three chapters deal with spatial sound reproduction and spatial sound phenomena. In Chapter 10, we cover reverberation, with particular emphasis on both algorithmic and convolutional approaches to artificial reverberation. Though grouped together with the other chapters concerned with spatial sound, the reverberation approaches described here do not necessarily require the processing of two or more channels of audio.
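Classic Schroeder-style algorithmic reverbs are built from comb and all-pass filters. A feedback comb filter recirculates a delayed copy of its output, producing a decaying train of echoes; the sketch below is illustrative, with arbitrary delay and gain values:

```cpp
#include <vector>
#include <cstddef>

// Feedback comb filter: y[n] = x[n] + g * y[n - D].
// An impulse at the input yields echoes 1, g, g^2, ... spaced
// D samples apart, i.e. an exponentially decaying echo train.
class FeedbackComb {
public:
    FeedbackComb(std::size_t delaySamples, float gain)
        : buffer(delaySamples, 0.0f), g(gain), pos(0) {}

    float process(float in) {
        float delayed = buffer[pos];       // y[n - D]
        float out = in + g * delayed;
        buffer[pos] = out;                 // store y[n] for later
        pos = (pos + 1) % buffer.size();
        return out;
    }
private:
    std::vector<float> buffer;
    float g;
    std::size_t pos;
};
```

A usable reverb would run several such combs with mutually prime delay lengths in parallel, followed by all-pass filters to increase echo density.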

Chapter 11 covers many of the main spatialization techniques, starting with panning and the precedence effect as used in stereo positioning, and then moving on to techniques requiring more and more channels: vector-based amplitude panning, ambisonics, and wavefield synthesis. Finally, we describe binaural sound reproduction using HRTFs, for listening over headphones.
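The simplest of these techniques, stereo amplitude panning, is often done with a constant-power (sine/cosine) pan law, so that the left and right gains always satisfy gL² + gR² = 1 and the perceived loudness stays roughly constant as the source moves. A minimal sketch, with names of our own choosing:

```cpp
#include <cmath>

// Constant-power pan law. `pan` runs from 0.0 (hard left)
// to 1.0 (hard right); 0.5 is center.
void constantPowerPan(float pan, float& gainL, float& gainR) {
    const float halfPi = 1.57079632679f;
    gainL = std::cos(pan * halfPi);
    gainR = std::sin(pan * halfPi);
}
```

At center, both gains are 1/√2 ≈ 0.707, i.e. −3 dB per channel, which is why this is sometimes called a −3 dB pan law.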

Chapter 12 covers the rapidly growing field of immersive audio. Immersive audio is not precisely defined, but it typically allows both the listener and the sources to move in any direction from arbitrary positions, with arbitrary orientations, and to specify the directional properties of the sources, how sound levels decay with distance, and how the spatial sound is finally rendered. It aims to fully immerse the listener in a sonic environment. Here, an immersive audio effect is constructed, where sounds are rendered given full knowledge of the environment. We also show how the Doppler effect, a physical phenomenon associated with moving sources or moving listeners, is derived and simulated.
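For a source moving along the line toward a stationary listener, the Doppler-shifted frequency is f′ = f · c / (c − v), where v is the source's approach velocity and c the speed of sound (about 343 m/s in air). A sketch for illustration only:

```cpp
// Doppler shift for a moving source and stationary listener.
// sourceVelocity is positive when the source approaches the
// listener (frequency rises) and negative when it recedes.
float dopplerShift(float sourceFreq, float sourceVelocity,
                   float speedOfSound = 343.0f) {
    return sourceFreq * speedOfSound / (speedOfSound - sourceVelocity);
}
```

In an audio effect, the same shift falls out naturally from a delay line whose length tracks the time-varying source-listener distance.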

Chapter 13 is about audio production. This is of course a very broad area, so we focus on the architecture of mixing consoles and digital audio workstations, and how the effects we’ve described may be used in these devices. We then discuss how to order and combine audio effects to address various production challenges.

Chapter 14 picks up where Chapter 2 left off in exploring principles and practical strategies for implementing audio effects in code. This chapter examines several aspects of evaluating and improving the computational performance of audio effect plug‑ins, including computational efficiency, memory footprint, and latency. It also introduces why, where, and how to use multi‑threading in audio code, in which some functions run asynchronously to the main audio-processing thread. Finally, this chapter introduces the implementation of audio effects on embedded systems such as microcontrollers and single‑board computers.
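One recurring multi-threading pattern is passing parameter changes from a GUI thread to the real-time audio thread: the audio callback must never block, so lock-free atomics are commonly used instead of mutexes. A minimal sketch of that pattern, with class and member names of our own choosing:

```cpp
#include <atomic>

// Lock-free parameter handoff between a message thread and a
// real-time audio thread.
class GainProcessor {
public:
    // Called from the GUI/message thread.
    void setGain(float g) { gain.store(g, std::memory_order_relaxed); }

    // Called from the audio thread; reads the latest value
    // without ever blocking or allocating.
    void processBlock(float* samples, int numSamples) {
        float g = gain.load(std::memory_order_relaxed);
        for (int i = 0; i < numSamples; ++i)
            samples[i] *= g;
    }
private:
    std::atomic<float> gain{1.0f};
};
```

In practice the audio thread would also smooth the loaded value over the block to avoid audible clicks when the parameter jumps.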