Networked music performance: An introduction for musicians and educators

September 18, 2020
[Updated to include new software on October 26, 2020]

A photo of a networked concert, with musicians on a stage joined by others on screens.
Changing Tides: A Translocational Concert Series, part 3 (2016). UC San Diego stage with Mark Dresser, Nicole Mitchell, Michael Dessen, and Stephanie Richards, along with musicians in Stony Brook, NY (on screen): Denman Maroney, Satoshi Takeishi, Min Xiao-Fen, Marty Ehrlich, Ray Anderson and Sarah Weaver.

Note: A shorter version of this article was also published as a guest blog post for SmartMusic.com

Introduction

In 1957, while prohibited from traveling outside the US because of his political activism, Paul Robeson performed at a choral festival in Wales run by a mineworkers’ union. Yet he defied the McCarthyite attack on his freedom without traveling to the UK in person: instead, Robeson participated from a New York City recording studio, sending his powerful bass-baritone presence over newly laid transatlantic telephone lines.

That concert’s audio quality was low by today’s standards, and the musicians alternated performances since synchronous playing was not possible. But as musicologist Shana Redmond reveals in her vivid narration of the encounter, Robeson and his Welsh collaborators still forged a meaningful connection, enacting a creative response to necessity.

Though rarely mentioned in histories of music technology, Robeson’s performance reminds us that music making across distance is not a new phenomenon. But following many experiments with multi-site music making during the second half of the 20th century, the rapid growth of the internet in the first two decades of the 21st has vastly expanded this idea, connecting artists and researchers across the planet in a collaborative field often called networked music performance.

An important spark for this work in the early 2000s was the emergence of high-bandwidth, fiber-optic networks like US-based Internet2 and similar partners in other countries. These new networks enabled developers to create software for performing music together across different geographic locations with tighter synchrony and higher sound quality than ever before. They also inspired composers and improvisers to create new works — sometimes described as telematic music — that were designed specifically for the networked medium.

Much of this work was limited to campuses that could provide the costly networks and infrastructural support required. In recent years, consumer internet speeds have improved, enabling better results from home. Even so, most musicians, teachers and concert presenters have had little interest in incorporating complicated audio networking software into their daily work.

That changed almost overnight with the coronavirus pandemic. Musicians wanted to play together, educators wanted to run ensemble classes remotely, and questions that were once curiosities suddenly became urgent: What software for low latency performance works best? What equipment is required? Does it work for large groups?

Simple answers are difficult, because what we require to play music together varies across different contexts, and the best solution for a given situation depends on our priorities and resources. In addition, the tools are changing quickly, with a surge of development responding to new demand.

For musicians who want to replicate the tight rhythmic synchrony they experience when playing in the same room, the good news is that under ideal conditions, this is possible via internet, up to roughly 500–600 miles. But what makes for “ideal conditions” is sobering: High quality software is rarely simple to use, and good results require that each person have decent audio equipment and network quality. These are significant hurdles, given the cost of hardware as well as inequities in broadband access in the US.

Still, the best software programs currently available for networked music performance are not only constantly improving, but are also free and open source. This is a crucial point. Like most new technologies, these tools evolve through a feedback loop among countless makers and artists. Rather than wait passively and allow a single commercial product to define the possibilities for us, we should explore them collectively, working across diverse communities. This is especially critical for networking technologies, since they not only depend on, but are fundamentally about our interconnectedness.

This article is aimed at curious musicians and music educators, especially those without technology experience. It covers basic considerations in networked music performance and offers suggestions for teachers, ending with broader ideas on how we can use networks not to replace the cultural work we already do as musicians and educators, but to expand and deepen it.

Latency basics

Under ideal conditions, we can play music several hundred miles apart via internet in tighter rhythmic synchrony than we ever could sitting across a large symphony orchestra.

This sounds absurd, but is true. The reason is that audio data moving across fiber-optic networks can travel at nearly the speed of light, far faster than sound moves through air. Of course, achieving tight synchrony via internet requires work, including attention to aspects of music making that we rarely think about when in the same room. One of those is latency, which in music contexts refers to the time it takes sound to travel.

Latency is always part of how we experience sound, but we notice it only when it gets in the way of what we are trying to do. On Zoom, for example, the one-way latency between participants can be a quarter second or more. This is fine for a conversation, but not for many kinds of music.

Any trained musician can see why just by doing the math. Imagine that you play a sound and it reaches me a quarter second later. I then play with what I hear, and my sound reaches you a quarter second after that. This means that although I experience us playing together, you hear me playing a half second behind you. If we are playing rhythmic music at a tempo of quarter note = 60 bpm (one quarter note per second), I would therefore sound to you an eighth note late. If, instead of playing in time with what I hear, I correct this by playing an eighth note ahead of your beat, we will sound in synchrony to you, but not to me.
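
To make that arithmetic explicit, here is the same example in a few lines of Python (the latency and tempo values are the ones from the scenario above):

```python
# One-way latency of a quarter second in each direction.
one_way_latency = 0.25            # seconds
round_trip = 2 * one_way_latency  # 0.5 s: how far behind I sound to you

# At quarter note = 60 bpm, one beat lasts one second.
beat_duration = 60 / 60           # seconds per quarter note

# My sound reaches you half a beat late: an eighth note at this tempo.
offset_in_beats = round_trip / beat_duration
print(offset_in_beats)            # 0.5, i.e. an eighth note
```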

This dilemma has nothing to do with the internet, only the latency, which also occurs when sound travels through air. To simulate the latency of a long-distance Zoom call, for example, we could simply stand 250 feet (76 meters) apart from one another.

How quickly must the sound travel in order for us to play rhythmic music together in tight synchrony, without either person sensing anything wrong or making timing adjustments? Research suggests the answer is roughly 30 milliseconds or less, one way. This is the latency we experience through air at a distance of roughly 30 feet (9 meters).
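
These distance equivalences are easy to check: multiply the latency by the speed of sound, roughly 1,125 feet (343 meters) per second at room temperature. The figures in the text are rounded; a quick sketch:

```python
SPEED_OF_SOUND_FT = 1125  # feet per second, approximate at room temperature

def latency_to_distance_ft(latency_ms):
    """Air distance that produces a given one-way latency."""
    return SPEED_OF_SOUND_FT * latency_ms / 1000

print(latency_to_distance_ft(30))   # ~34 ft: the tight-synchrony threshold
print(latency_to_distance_ft(250))  # ~281 ft: a quarter-second, Zoom-like delay
```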

Thinking about physical distance equivalencies in this way can help us understand the relationship between latency and music performance with more nuance. For example, imagine standing 60 feet away from someone — a distance we might experience on a large stage — and clapping a steady pulse together. Even though that is twice the 30-foot “ideal” threshold mentioned above, skilled performers might be able to play a loosely synchronized pulse. However, that imprecise synchrony would likely require constant micro-adjustments on their part, calling their attention to the distance.

Now imagine the same 60-foot duo, but with one person on bass and another on drum set, playing a groove at a fast tempo. Since they are negotiating far more detailed subdivisions of the pulse, these musicians will experience the synchrony challenge in an even more heightened way, and they will not be able to perform certain types of music at all.

In fact, these higher levels of latency are common in music making. Orchestras, operas and large stages with complex PA systems are just a few of the contexts where musicians navigate multiple, competing delays among numerous people, well above that 30 millisecond threshold. A marching band, for example, might be spread out across a distance of 100 feet or more, but by placing percussionists in closely formed subgroups, they can better project sensations of synchrony and groove. Musical forms evolve in response to the spaces where they are created, and synchrony is a rich dimension of practice to investigate, even in our offline music making.

Latency in networked music making

When we hear each other through the air, latency is predictable, because we know the speed of sound, and also beyond our control, since other than moving closer together, nothing we can do will reduce it. But when we perform over networks, both of those aspects are reversed: Latency is very difficult to predict, since it depends on numerous factors, and while we can’t control it entirely, our tools and our choices — including complex software settings — make a significant impact.

The first stage of latency in networked music is the time our sound waves take to travel through the air to a microphone, but with close mic placement, this is minimal. Far more important is what happens next in our hardware and software, which convert the sound to digital data, and wrap that data into “packets” to prepare it for its trip across the network. The latency at this stage — in other words, how long this entire process takes — depends on the speed of our hardware and software, and how we use it.
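
The core of that processing delay is easy to estimate. Audio software handles samples in fixed-size blocks, and each block must fill before it can be converted or packetized, so every buffer stage adds block size divided by sample rate of delay. A quick sketch with typical values:

```python
def buffer_latency_ms(frames_per_buffer, sample_rate):
    """Time to fill one audio buffer, in milliseconds."""
    return 1000 * frames_per_buffer / sample_rate

# Typical low-latency settings: small blocks at a standard sample rate.
print(buffer_latency_ms(64, 48000))   # ~1.3 ms per buffer stage
print(buffer_latency_ms(256, 48000))  # ~5.3 ms: bigger blocks, more delay
```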

Once these packets of audio data are sent out of our computer, they travel through the “local area network” (LAN) within our home or school, then out into the “wide area network” (WAN) of underground cables between us and our partner. When they reach the other site, they go through the reverse process, moving through our partner’s local network, into their hardware and software, to be converted back to sound waves and reach their ears. If we are connecting with more than one person, this same process happens in multiple directions simultaneously, sometimes through a “server” computer that acts as a central hub, receiving everyone’s audio data then sending it to all the others.
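
For the curious, here is a toy sketch of what “sending packets” looks like in code. Low-latency audio apps typically use UDP datagrams, which, unlike the TCP connections behind web pages, are never delayed by retransmitting lost data. The address here is a placeholder (though 4464 is JackTrip’s default port), and a real app would fill the block from the audio interface rather than with silence:

```python
import socket

# One block of audio as raw bytes: 128 frames x 2 channels x 2 bytes
# (16-bit samples). Here it is just silence, for illustration.
audio_block = bytes(128 * 2 * 2)

# Placeholder address for our partner's machine.
partner = ("192.0.2.10", 4464)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)  # UDP, not TCP
sock.sendto(audio_block, partner)  # fire and forget: no retransmission
```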

Although we have some control over the speed and “Quality of Service” of our network at home, both depend largely on our Internet Service Provider (ISP). And other than public advocacy work, we have no control at all over what happens in the underground (or even undersea) fiber-optic cables running between our locations. But where we have the most direct ability to control latency is in the hardware and software we use at each location. For this reason, since the pandemic, there has been an explosion of interest in this topic among developers, who are working to improve existing low-latency networking apps and create new ones.

These are exciting developments, but as noted above, we can also take a wider view of musical synchrony. In addition to pushing the possibilities of tightly synchronous performance under the 30 millisecond threshold, many of us involved with this field have also made new kinds of music that approach latency and other aspects of networks as creative challenges. We have performed numerous concerts, including rewarding intercultural collaborations thousands of miles apart, with multi-channel, high quality sound and latency lower than a typical Zoom call. For long distance concerts, with latency in the 100–125 millisecond range, we have explored not only music with a loose pulse or none at all, but also other strategies like multi-tempo or multi-ensemble rhythmic textures.

Such techniques are not unique to networked music; since long before the internet, musicians have been creating innovative rhythmic practices that go beyond conventional metric structures and expand our understanding of time and feel. The telematic stage offers a new perspective on such rhythmically multi-dimensional music, and like any emerging medium, invites us to extend inherited traditions into new discoveries.

Sound quality

All musicians will say that they value high quality sound, but what that means in actual practice varies widely. Because technology often involves tradeoffs, it’s helpful to understand the factors that determine sound quality.

In networked performance, the acoustics of our physical spaces often have only a small impact on sound, if microphones are placed close to each instrument. But the quality of our audio equipment is crucial, and impacts both sound and latency. Today’s computers include tiny built-in microphones and “sound cards” that convert sound to and from digital data. For joining a Zoom call or streaming a video, they offer impressive results for their size, but they typically don’t provide good enough quality or speed for performing music together.

Instead, it is far better to use an external audio interface and microphone. Many musicians are unfamiliar with audio interfaces, and the numerous types on the market can be intimidating. But as its name implies, the interface is a central tool in digital music making, bridging sound and machine, and learning even the basic principles can go a long way.

An audio interface takes in sound via a microphone or other “inputs” and converts that sound to digital data for our software to use; likewise, it also takes digital data from our software and converts it to sound, sending it to the “outputs” where we connect our headphones or speakers. Some interfaces do this work faster than others, meaning there is less latency added in the hardware and/or in the accompanying “driver” software (which enables the interface to communicate with the computer). Lower latency interfaces are ideal for networked music making, where even a difference of a few milliseconds can matter.
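
One way to see why a few milliseconds matter is to sketch a one-way latency “budget” against the roughly 30 millisecond threshold discussed earlier. The numbers below are plausible placeholders rather than measurements:

```python
# Hypothetical one-way latency budget, in milliseconds.
budget = {
    "interface in (mic -> digital)": 2.0,
    "software buffering/packetizing": 3.0,
    "network transit": 12.0,
    "receiver jitter buffer": 6.0,
    "interface out (digital -> ears)": 2.0,
}

total = sum(budget.values())
print(f"total one-way latency: {total} ms")  # 25.0 ms: just under threshold
# A slower interface adding 10 ms at each end would blow the budget.
```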

High quality, low latency, USB-C interfaces are available for under $200, though that is still prohibitive for many people. It may still be worth experimenting with built-in audio or inexpensive USB microphones if nothing else is possible, but a rewarding ensemble experience often requires better equipment. Many developers are working on this access dilemma by building options around small, inexpensive devices like the Raspberry Pi (the JackTrip Virtual Studio described below is one example), and new products along those lines are expected soon.

Your microphone and interface determine the quality of sound you can send to the other musicians, but just as important is how you hear their sounds. For example, using cheap ear buds, you will hear not only poor quality sound from your partner, but a lot of your own acoustic sound through the air, making it impossible to play together. You can also experiment with speakers rather than headphones, but need to place them carefully so they do not send sound directly into your microphone, which will cause your partner to hear their own sound transmitted back to them.

In addition to hardware equipment like interfaces, microphones, headphones and speakers, the quality of our networking software also impacts the sound quality we experience. Software that easily allows each player to set all the players’ volume levels makes a huge difference, just as with individual monitor mixes on stage or in a studio, but this may add complexity or require more bandwidth. And an especially key factor is whether the software uses compressed audio formats, which use less bandwidth and therefore work better on slower networks, but also reduce sound quality and timbral subtlety. Some apps are designed to work with uncompressed audio formats, which enable the sound quality of a professional recording studio, but these only work well with sufficient network speed.
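
The bandwidth behind that tradeoff is simple to estimate: an uncompressed stream’s data rate is the sample rate times bit depth times channel count, while compressed formats such as Opus typically use a tenth of that or less. For example:

```python
sample_rate = 48000   # samples per second
bit_depth = 16        # bits per sample
channels = 2          # stereo

uncompressed_kbps = sample_rate * bit_depth * channels / 1000
print(uncompressed_kbps)  # 1536 kbps (~1.5 Mbps), before packet overhead

# A typical compressed (e.g. Opus) music stream might run at ~96-128 kbps,
# roughly a tenth of the bandwidth, at the cost of some timbral detail.
```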

In a similar tradeoff, software that is simpler to use might also offer less flexibility for fine-tuning results based on your situation. For example, an important factor that impacts both sound quality and latency is the size of various software “buffers,” which are somewhat like waiting rooms where audio data packets gather upon arrival, to make a more orderly entrance. If any aspect of our software, hardware or network cannot keep up with processing all the arriving packets, then just as with too many people trying to cram into a full waiting room, some of the packets will be “dropped,” causing glitches in the sound. With sophisticated software, we can increase the buffer size to solve this, but doing so means that the packets spend more time in that larger waiting room, since it takes longer to fill up. So while none are dropped and we now hear good sound, we have also increased the latency (wait time), which might prevent us from playing together in tight synchrony.
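
In code terms, the waiting-room tradeoff might be sketched like this, assuming each arriving packet carries 128 frames of audio at a 48 kHz sample rate: every extra slot in the buffer absorbs more network jitter, but also adds one packet’s duration of latency.

```python
frames_per_packet = 128
sample_rate = 48000
packet_ms = 1000 * frames_per_packet / sample_rate  # ~2.7 ms per packet

for queue_depth in (2, 4, 8):
    added_latency = queue_depth * packet_ms
    print(f"buffer of {queue_depth} packets adds ~{added_latency:.1f} ms")
# Deeper buffers mean fewer dropped packets (less glitching), but each
# extra slot adds ~2.7 ms before we hear the sound.
```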

Current software options

For all these reasons, getting good quality sound while minimizing latency takes practice and experimentation, and high quality networking apps are rarely a “plug and play” experience. Still, since the pandemic began, more developers than ever have been working to create simpler apps without sacrificing quality. No list is comprehensive and this field is changing quickly, but with those caveats, here are some current software options.

JackTrip is free, open source software developed in the early 2000s by Chris Chafe, Juan Pablo Cáceres and the Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University. It was created specifically for networking uncompressed, low-latency, multichannel audio, and many of us have used it for over a decade to produce concerts large and small, with excellent results. JackTrip itself is not simple to use, but it is now being extended in exciting new directions, including several aimed at increasing ease of use without sacrificing quality. One is the forthcoming Virtual Studio web application, which will enable simple but high quality ensemble performance for those able to purchase the required small devices and accessories (at roughly $200 per player). And another is Miller Puckette’s Quacktrip and Nettie McNetface, an innovative pair of free apps that use the JackTrip protocol, but offer a more streamlined user experience.

[New app added on October 26, 2020:] Sonobus is a new app that combines excellent quality with a user-friendly, well designed interface. It includes a recording feature and offers the option to add compression and adjust audio formats, so that you can adapt it to your network quality. You’ll get optimal results if you can learn enough to navigate those and other settings, but it does not require port forwarding, and is relatively simple to install and use compared to most apps that offer similar quality sound.

Jamulus is another open source program with a large user base that has developed over years. It uses a compressed audio format and is less flexible than JackTrip, but is far easier to install and use, including a simple visual interface and helpful support forum. In addition, whereas JackTrip requires that at least one user (in the “server” role) set up port forwarding on their network, a task not recommended for beginners since it impacts computer security, this step is not required with Jamulus because it uses public servers. The downside of public servers is that anyone might join your “room” and could then hear your Jamulus session, but users with the technical knowledge can set up a private server to avoid this.

SoundJack, developed in Europe by Alexander Carôt, has also been evolving for many years. It is free and the visual interface is well designed for ensembles, though it uses a compressed audio format and is somewhat more complicated to use than Jamulus, including the requirement that you set up port forwarding on your network.

Jamkazam is one of several companies that have marketed a proprietary hardware-software solution, typically a small box with audio inputs and outputs and a simple app for connecting with other users. Several products like this over the past decade have not survived, due to low demand and the challenges of combining ease of use with good results. The user feedback I have heard on Jamkazam is mixed at best, so I cannot recommend it for high quality performance, but it might be worth trying if simplicity is the main priority.

Most of these apps do not include video at all, because video requires even more data than audio, making it very difficult to achieve the same low levels of latency. Some developers have created integrated audio-visual telepresence platforms, though most come with one or more limitations. Still, those on Apple computers might want to try Artsmesh, which has an elegantly designed interface and integrates JackTrip along with high quality video, chat and other networking tools. And if you are fortunate enough to have access to high-speed networks and costly equipment, you can explore LOLA, a high quality, integrated platform developed over many years by researchers in Italy.

Note also that good results usually require a wired (ethernet) connection, but soon we will likely see software better suited for wifi and even 5G cellular networks. Products such as Aloha by Elk are already being advertised during the beta testing stage, but I am not aware of quality options on 5G ready for consumers yet.

It’s worth noting that if tight synchrony is not your main priority, other software might be a better choice. For example, musicians who make loop-based, electronic music might want to explore apps that add latency in order to synchronize players within a shared metric structure, but in different measures. A veteran app in this category is Ninjam, which has a large community of users.

Alternately, for scenarios such as remote, multitrack recording, where exchanging high quality, multi-channel audio is more important than reducing latency, the best choice might be a program like SourceConnect, Cleanfeed, or Audiomovers’ ListenTo. And if you simply want good sound without advanced features or low latency, Zoom recently added a “high fidelity mode,” in response to music school faculty requesting better audio quality for lessons and masterclasses.

Finally, if you want to record, add effects, or stream to an audience while using audio networking software, you will need to route audio into or out of other apps like Digital Audio Workstations (DAWs), streaming encoders or videoconferencing programs. This usually requires a dedicated routing application or a “virtual audio device.” Finding the best routing solution can take trial and error, but fortunately there are numerous apps available, including many free options.

Evaluating scale and complexity

In summer 2020, I was inspired to see many dedicated music teachers learning new tech skills to help their students. There is no substitute for that kind of hands-on experimentation. But to put together the information above and visualize the overall picture, it can help to think of a scenario more familiar to musicians: the recording studio.

Imagine a studio with multiple isolation booths, each of which represents a “site” in your networked environment. If you want to play duo with a friend, you only need two booths, whereas if you want to rehearse an orchestra of 60 musicians, each playing from home, you need a booth for every one of them.

In a real-life studio, each booth only needs a mic and headphones, which connect to the outside via cables. But in our imaginary studio, where each booth represents a musician in a different geographic location, each one is completely cut off from the others, with no sight lines, and connected through the internet. In addition, each booth must also have an audio interface, a computer of some kind running audio networking software, a wired internet connection, and if visual contact is needed, such as for a conductor, a videoconferencing app and a camera/screen.

It may seem strange to visualize a warehouse-size recording studio like this, but essentially this is what we are doing in networked music performance, with the added complication that each site is not an iso booth, but someone’s home. Are there other people in the home making noise that will be picked up by the microphone? Is a family member streaming a movie or joining a Zoom call, competing for bandwidth? In a large ensemble context, issues like that at one site might add unwanted noise or distortion that the entire ensemble would hear.

We can also think of a recording studio’s control room to understand the role of a “server.” For a small number of sites, each might connect to the others directly, a mode termed “peer to peer.” But as the number of peers increases, all those connections become complicated to manage, and require more bandwidth (network speed) at each site.

Much like a control room in a recording studio, a central server computer in a networked scenario solves these problems by receiving the data from each of the sites (“clients”), and sending it out to all the others, an arrangement (or “network topology”) described as “client-server” mode. In this arrangement, the server must have enough bandwidth to handle those many streams of incoming and outgoing data, so ideally the server machine would be on a fast, stable network, such as an institution or a cloud hosting service.
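
The bandwidth arithmetic behind that choice is worth sketching with hypothetical numbers: in peer-to-peer mode every player uploads a stream to every other player, while in client-server mode each player uploads once and the server shoulders the rest.

```python
stream_kbps = 600   # hypothetical bandwidth of one player's audio stream
players = 8

# Peer to peer: each player uploads to every other player.
p2p_upload_per_player = (players - 1) * stream_kbps
print(p2p_upload_per_player)  # 4200 kbps from *each* home connection

# Client-server: each player uploads one stream; the server receives one
# stream per player and sends each player a feed of everyone else.
client_upload = stream_kbps                 # 600 kbps per home
server_traffic = players * stream_kbps * 2  # 9600 kbps in and out at the hub
print(client_upload, server_traffic)
```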

Some musicians only want to use networks to rehearse, but if you intend to broadcast the performance for an online (or even in-person) audience, this adds an additional layer to your production. In this case, it’s usually necessary to have someone in the role of a central producer, managing the server. Their role would be to mix all the video and audio in a streaming encoder program — often adding an audio delay at this stage, to sync the sound with video — and then stream it to the final destination.

As noted above, video typically has even more latency than audio, which means that a conductor will have to tolerate a slight delay in the musicians’ responses to her visual gestures, because of the time her video takes to reach the performers. To make an in-person analogy, the conductor’s experience — assuming the ensemble is successfully using low-latency audio software, and video is transmitted through an app like Zoom — would be like having an ensemble that can play in synchrony with one another, but conducting them from 200 feet or more away.

Suggestions for educators

Music teachers seeking a magical app to seamlessly run ensemble classes might find these requirements discouraging, but even within constraints there are many possibilities to explore. For teachers willing to experiment or planning for the long term, here are some suggestions.

Start small. This doesn’t mean giving up forever on the idea of remotely rehearsing ensembles, but simply having realistic expectations and goals. Setting up a class so that its success depends on a new or complex technology performing flawlessly is an invitation to disaster. Far more effective is to integrate flexible pilot tests in ways that make technological learning one of the goals of the course, rather than merely a means to other ends. Are there any students — a single duo, or a small subset of the group — who might be motivated to do a self-guided study, to learn enough to start helping others? Realistic course designs can help us avoid frustration and build capacity for long term work.

Be friendly to your IT staff. Each situation varies in the details, but you will need their help. Technology staff at many institutions may be reluctant or unable to support emerging software, and can rarely be responsible for all the aspects involved in this work. However, most IT staff are also eager to help teachers explore innovative solutions, and are often crucial for configuring networks and servers. Especially for teachers without much technology experience, the best approach is to coordinate closely with tech staff from the beginning, deciding together what experiments could be possible.

Teach students (and ourselves) not just how to use technology, but how to learn it. The apps and hardware we use today will change soon, but not the underlying principles. Even instructors who lack the experience to teach students directly about technology can still learn alongside them. Learning how to detail a tech problem and write a truly effective help request is one of the most valuable skills for students to learn, far more useful than knowing how to use any particular app.

Apply your musical knowledge of how to practice. Some musicians and teachers feel insecure about engaging with students around technology, but it can help to remember the skills we bring as musicians, even if we normally use them in a different domain. Performing an instrument at a high level is an iterative practice we develop over many years, with multiple layers. To improve any aspect, we analyze what is happening in the mind, body, and instrument, then try new approaches, and repeat. Despite the different background knowledge required, working in a networked performance environment is a similar practice of analyzing a problem, experimenting with different choices, and studying the results. I have often seen musicians with no prior tech experience learn to dissect technology problems effectively by drawing on these skills that they developed through years of musical practice.

Make collaborative skills part of what we teach and learn with our students. In many large ensemble settings, players must precisely coordinate their actions with one another and the conductor, but have no freedom to improvise or choose what to play; in other contexts like small group, improvisatory forms, each player must contribute compositional choices and navigate highly collective decisions. But whatever type and degree of collaboration is part of your aesthetic, networked music offers an opportunity to expand it. This work is inherently collaborative, and communication skills are critical, such as when we have to coordinate troubleshooting across multiple sites to discover where a problem is even located.

Explore new musical possibilities. Even when tight synchrony is out of reach, we can still explore ways of making music together via internet, with real time interaction. When the pandemic began, I heard from a colleague in Bogotá whose college jazz improvisation class was forced to finish the semester on Zoom. Rather than give up on playing together, the students developed a series of open improvisation exercises intended to work within Zoom’s limitations, testing the techniques and documenting them in a pamphlet to encourage others. Improvisation is increasingly being integrated into conventional music education programs and can offer the flexibility needed to respond creatively to constraints, while empowering students to experiment and collaborate.

Networks for social transformation

My final message to educators is a broader invitation: Use networks to build in new ways on the cultural work we do as artists and educators, including collaborations that expand students’ learning communities and contribute to progressive social change.

A university professor recently told me that almost none of their jazz piano majors had a piano on which to practice, and most could not afford even an inexpensive electric keyboard, making progress towards their performance degree impossible. Yet in speaking to a colleague, this professor discovered that most classical piano majors in the same department had grand pianos in their homes.

However one interprets anecdotal patterns like this, the fact remains that the pandemic’s impact on both students and the US population is more severe for already disadvantaged groups, particularly people of color and low income communities. Many high school and college music programs provide students with not only knowledge and training, but also the spaces and instruments needed for their studies. With campus closures, this crucial role and so many others played by schools have instantly been stripped away.

Addressing unequal access within our educational system demands sustained, collective effort, and it is easy to think of that work as being located elsewhere, beyond our classrooms and expertise. But the choices we make always matter, and networks offer new possibilities that educators are only beginning to explore. Here are a few examples of ways we might integrate networked music performance and other high quality telepresence tools into our teaching and institutions.

Facilitate collaborative projects among schools and students with unequal resources. Regional “all-state” band and orchestra events are a powerful way to bring students together around a shared commitment to musical training. These are often important cultural experiences for students, enabling them to interact with peers from different backgrounds. How could we use networks to extend this idea beyond a yearly weekend workshop? Could educators from under-served and wealthier school districts collaborate on a grant for equipment and tech support, to enable students to work in cross-regional chamber ensembles throughout the year? Such projects could include guest composers or coaches, original works created collaboratively by students, and in-person concerts. For students who live in the same region but experience it in different ways due to differences in race and class, collaborating closely over an extended time can be deeply transformative.

Expand on mentor programs that pair college music students with under-served communities near and far. Numerous classical music organizations have formed in the US with a dual social and musical mission, often inspired by initiatives like Venezuela’s “El Sistema,” and high school music programs often serve a similar function. In many cases, including a wonderful partnership that one of my own colleagues established with a nearby high school, university students provide lessons and ensemble coaching, while gaining teaching experience and expanding their own social world. Traveling to one another’s spaces in the literal sense is a crucial part of such programs, but networks can supplement this in important ways. For example, by connecting a high school and a college music department via a small, dedicated lab at each site with ready-to-use, high quality telepresence tools, students and mentors could use regular networked sessions to further deepen what they do when in person, when distance or logistics make routine travel difficult.

Use intercultural collaborations to expand students’ world views and form new musical partnerships. Projects linking musicians across cultures and continents are difficult but rewarding, and have been a central part of networked music performance from the start. In music education, this is an under-explored area that could grow in exciting ways as networks and tools improve. As a small example, I was inspired by the students in a networked course that I co-led with colleagues in Manizales, Colombia, linking high school music students there with Latino peers in California. Using our department’s music technology labs, the bilingual class enabled students to perform together, to create collaborative sound collages, and to learn more about one another’s cultures and life experiences, while also giving many “first-generation” students the opportunity to work on a university campus.

Of course, expanding such ideas from small pilot projects into long term, sustainable partnerships is hard work. Beyond simply purchasing new apps and machines or submitting grant proposals, it requires learning new teaching skills, taking risks and developing collective vision. And while many of these projects are great opportunities for funding and support, we must look beyond simplistic, techno-utopian hype as online teaching becomes more normalized, pushed on us by both cost-cutting measures and tech companies eager for new markets. Writing about what she terms the new “pandemic shock doctrine,” including efforts by corporations and institutions that prioritize profits over democracy, Naomi Klein astutely warns that “tech provides us with powerful tools, but not every solution is technological.” Regardless of their politics, most teachers I know intuitively understand this point.

In this same spirit, what many of us have sought over years of networked music making is not fundamentally about the spectacle of advanced technology, but about musical community. Today’s global, fiber-optic networks offer infinite creative potentials for musicians and teachers, as modeled by pioneering visionaries like Pauline Oliveros (1932–2016) and Geri Allen (1957–2017), along with so many others continuing that work now. Even if our social, environmental and political challenges are unprecedented in scale, so are our tools. Perhaps what we do with networks and music now is a creative challenge we have been training for all along.

Thanks to the generous collaborators whose work in telematic music has inspired me over many years, far too many to list but especially Mark Dresser, Nicole Mitchell, Myra Melford, Chris Chafe, Sarah Weaver, George E. Lewis, Trevor Henthorn, Mario Humberto Valencia, Matthias Ziegler, Jun Oh, Juan David Rubio, Tata Ceballos, Yoon Jeong Heo, Joshua White, Shahrokh Yadegari and Jason Robinson, along with numerous others.

I’m a musician and a professor at the University of California, Irvine. For more, please visit www.mdessen.com
