
Music & Technology


A Collaborative Approach to the Future of Music

The Master of Science in Music and Technology gives students the freedom to push the boundaries of their expertise. Students accepted into this program have presented substantial work beyond the typical four-year degree and excel in music or some aspect of technology, demonstrating an aptitude and desire to explore a specialized area in significant depth.

Students at the graduate level are able to hone their skills in an interdisciplinary environment, focusing on a chosen area of study, such as Recording Technology, Audio Engineering, Computer Music, Music Composition, Music Performance, and Music Theory. Our expert faculty promotes a collaborative approach to cutting-edge education that gives our students both the specialized knowledge and breadth of skills to foster development in the field of music.

With a customizable range of courses to choose from, students can explore new educational directions while honing their existing talents, and are expected to produce original work through a public performance and a written thesis.

As the backgrounds and needs of students are highly varied, specific course selection is supervised by the student's advisor, working in concert with an Advising Committee composed of representatives from the School of Music, the School of Computer Science, and the Department of Electrical and Computer Engineering.

The program consists of a set of courses spanning both music and technology, as well as a comprehensive capstone composition/design/performance project. Working closely with expert faculty, previous students have pursued study in areas including technologically assisted composition; technologically augmented performance; computer music systems and technology; music signal processing; music information retrieval; acoustics, sound recording, and music instrument design; and music cognition and perception.

The following outlines the competencies that must be developed by a successful candidate for the degree:

Electrical Engineering Emphasis

  • Basic knowledge or competency in music history, keyboard, and music theory
  • Thorough knowledge of electronic devices and analog circuits
  • Thorough knowledge of structure and design of digital systems
  • Working knowledge of recording, editing, and mastering software and skills (Pro Tools)
  • Working knowledge of a professional recording studio

Computer Science Emphasis

  • Basic knowledge or competency in music history, keyboard, and music theory
  • Knowledge of parallel and sequential data structures and algorithms
  • Knowledge of computer music systems
  • Working knowledge of recording, editing, and mastering software and skills (Pro Tools)

Music Emphasis

  • Competency in performance or composition at the conservatory level
  • Thorough knowledge or competency in music history, keyboard, harmony, eurhythmics, theory, and solfege
  • Basic knowledge of electronic devices and analog circuits
  • Basic knowledge of first-level computer programming courses
  • Working knowledge of recording, editing, and mastering software and skills (Pro Tools)
  • Working knowledge of a professional recording studio

Master of Science in Music and Technology Curriculum

Core Courses 60 units
A specific set of core courses will be identified by the Graduate Advisory Committee in consultation with each student on the basis of his or her background and experience. At least 24 units will be courses in the School of Music and at least 24 units will be courses in Computer Science or Electrical and Computer Engineering.  Courses fulfilling this requirement include but are not limited to the courses listed below.  Core courses and support courses may include thesis research credits (i.e. 15-571/15-572 Music & Technology Project).

Support Courses 36 units

Additional courses will be chosen by the student.  A graduate student should not repeat courses previously taken as an undergraduate student at Carnegie Mellon or elsewhere. Courses fulfilling this requirement include but are not limited to the courses listed below.  Core courses and support courses may include thesis research credits (i.e. 15-571/15-572 Music & Technology Project).

Performance/Capstone Thesis 18 units
57-971 Performance/Thesis 18 units

Music and Technology Seminar 4 units
57-970 Music and Technology Seminar
57-970 Music and Technology Seminar
57-970 Music and Technology Seminar
57-970 Music and Technology Seminar

Elective Courses 26 units


M.S. in Music and Technology Courses
This is not a complete list of options. Master's students are encouraged to take courses in Music, Computer Science, Electrical Engineering, and any other department, even if they are not specifically Music and Technology courses. For example, there are several excellent graduate courses on Machine Learning offered by various departments at Carnegie Mellon. Any of these courses can be taken, even though they are not listed here. Please see the Undergraduate Catalog for a complete undergraduate course listing. Courses, including graduate courses, are listed in the University Schedule of Classes (with links to short course descriptions). Your advisory committee will help you select courses.

Computer Music Systems and Technology
    15-322 Introduction to Computer Music
    15-323 Computer Music Systems and Information Processing
    60-439 Advanced SIS: Hybrid Instrument Building 

Signal Processing
    18-290 Signals and Systems
    18-491 Digital Signal Processing
    18-551 Digital Communication and Signal Processing System Design
    18-792 Advanced Digital Signal Processing
    18-798 Image, Video, and Multimedia 

Music Information Retrieval
    11-755 Machine Learning for Signal Processing
    15-826 Multimedia Databases and Data Mining

Machine Learning 
    10-601 or 10-701 Machine Learning
    10-705 Intermediate Statistics

Acoustics/Recording/Instrument Design 
    18-493 Electro-acoustics
    57-947 Sound Recording
    57-948 Editing and Mastering
    57-949 Multi-track Recording
    48-726 Acoustics and Lighting

Music Cognition / Perception
    85-756 (Graduate) Music and Mind: The Cognitive Neuroscience of Sound
    85-785 Auditory Perception: Sense of Sound
    57-377 Psychology of Music

Music Theory 
    57-441 Analysis of 19th Century Music
    57-442 Analytical Techniques
    57-430 Music of Iran
    57-605 Theory and Analysis for Graduate Students
    57-760 Schenkerian Analysis
    57-934 Advanced Analytic Techniques
    57-968 Post-tonal Theory and Analysis
    57-954 Shaping Time in Performance 

Music History 
    57-606 Music History for Graduate Students 1
    57-609 Music History for Graduate Students 2
    57-209 The Beatles
    79-345 The Roots of Rock and Roll 

Composition and Performance
    57-721 Major Studio (Composition)
    57-258 20th and 21st Century Techniques
    57-27x Orchestration
    57-969 (Graduate) Score Reading/Keyboard Harmony
    57-xxx Technologically-assisted performance independent study

In addition, many of our master's students take undergraduate courses to strengthen their knowledge in areas where they do not already have a strong background. See the B.S. in Music and Technology Curriculum for suggestions.

Carnegie Mellon courses are measured in units rather than credits or credit hours, with three units equaling a standard credit.
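For example, the curriculum above totals 144 units (60 + 36 + 18 + 4 + 26 units), the equivalent of 48 standard credits.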

General Schedule and Important Milestones

The nominal duration of the Master of Science in Music and Technology program is 21 months, or four semesters, starting in late August. Graduation is in May. Exact dates are available in the Carnegie Mellon University Academic Calendar.


Early August:

Select and register courses


Apr 30:

Thesis topic decided.

Write a one-page description of your topic.

Choose thesis committee.

The thesis committee should consist of at least two people: your advisor and one other member of the Carnegie Mellon faculty or staff. To select the second member of your committee, you should first consult your advisor and get approval of one or more candidates. Then, you should ask the candidate to be on your committee.


Early August:

Select and register courses. Remember to sign up for reading and research to allow time for your thesis project.

Summer and early September:

Prepare a thesis proposal of about two pages. The proposal should include:

  • Introduction
  • Review of the state of the art and related work
  • What knowledge and/or science is missing?
  • What will you do?
  • How will you evaluate your work?
  • What are the criteria for successful completion?

Sep 30:

Oral thesis proposal given in Music and Technology Seminar.

The committee in consultation with other faculty will decide to pass or fail the thesis proposal. If the proposal is not passed, the student must address the problems and present another proposal.


Jan 1:

Start writing thesis (if not already started).

Mar 1:

Finish thesis project.

Mar 15:

Completed thesis delivered to advisor.

Mar 25:

Make final revisions to thesis.

Apr 1:

Final thesis draft to committee.


Apr 1 - May 1:

Further editing and committee approval of changes.

May 1:

Master's defense should be complete by this date. A defense consists of a Master's oral presentation and a Master's recital.

The Master's oral presentation is a technical talk similar to a conference presentation. The talk should be carefully prepared and supported by slides with appropriate graphs and equations. If possible, the talk should include sound and/or video examples.

The Master's recital should ideally be a concert or recital, possibly a joint recital or even one piece on a longer program. Alternatively, the oral presentation and recital can be combined. The music should be professional in quality and relate to the thesis. The Master's candidate need not be the performer or composer provided that the thesis results are used in the music composition or performance.

A more technical thesis may not result in music appropriate for a concert. Although a music performance in a recital is the ideal, the recital requirement can be satisfied by a musical demonstration given as part of the oral presentation with the approval of the thesis committee.

The committee can either pass or fail the thesis oral presentation and recital. The committee can also request further changes in the thesis.


Dr. Richard Randall

Cooper-Siegel Associate Professor of Music Theory


Richard Randall is the Cooper-Siegel Associate Professor of Music Theory at the Carnegie Mellon University School of Music. Randall holds a faculty appointment at the Center for the Neural Basis of Cognition and is a researcher at CMU's Scientific Imaging and Brain Research Center. He received his PhD in Music Theory in 2006 from the Eastman School of Music of the University of Rochester.

Randall's research lies at the intersection of music theory, cognitive psychology, and media and cultural studies. His work employs a wide range of investigative methods in an attempt to better understand what music is and why it is important.  He directs the Music Cognition Lab and co-directs the Listening Spaces Project.

His lab investigates the neuroscientific basis of music perception and cognition.  Focusing on how musicality is a perceptual property of auditory objects, his lab uses fMRI to identify neural correlates of how musicality is modulated by changes in low-level acoustic organizational features.

Listening Spaces frames music as an essential human activity and seeks to understand the overwhelming impact technology has had on our collective and personal musical interactions. Their forthcoming book, 21st Century Perspectives on Music, Technology, and Culture, critiques current digital-music practices, how musical activities are commodified, and their social meaning. Listening Spaces also partners with local musicians, community organizers, and Pittsburgh schools to create the Pittonkatonk May-Day Music Festival and Workshop, which seeks to transcend traditional political economies of musician and audience and create socially engaged and sustainable musical events supported by vested community collaborators.


Riccardo Schulz

Teaching Professor, Director of Recording Activities


Riccardo Schulz is Teaching Professor in the School of Music at Carnegie Mellon, where he teaches Sound Recording and runs the recording operations. His special interest is in recording, editing, and mastering classical music. For three years he was head of the Edgar Stanton Audio Recording Institute (ESARI) for the summer program of the Aspen Music Festival and School. 

Riccardo has recorded and/or produced more than a hundred compact discs on a variety of record labels, including Élan, New Albion, Mode Records, Ocean Records, Norvard, and New World Records. He has also recorded and/or mastered CDs of world music, jazz, alternative rock groups, and selected hip-hop artists. Groups and individuals he has collaborated with include Cuarteto Latinoamericano, Andrés Cárdenes and Luz Manríquez; conductors Denis Colwell and the River City Brass Band, Keith Lockhart and the Cincinnati Chamber Orchestra, Eduardo Alonso-Crespo and the Tucumán Chamber Orchestra, Rachael Worby and the Wheeling Symphony Orchestra, Juan Pablo Izquierdo and the Carnegie Mellon Philharmonic, Robert Page and the Mendelssohn Choir; Andrés Cárdenes and the Pittsburgh Symphony Chamber Orchestra; Chatham Baroque; pianists Laura Opedisano, Aki Takahashi, and Barbara Nissman; santur player Dariush Saghafi; guitarist Manuel Barrueco; composers Iannis Xenakis, Reza Vali, Nancy Galbraith, David Stock, Ricardo Lorenz, Julián Orbón, and Leonardo Balada; mezzo-soprano Vivica Genaux; baritone Sebastian Catana; tenor Arturo Martín.

Riccardo’s recording of Inca Dances by Gabriela Lena Frank, featuring Cuarteto Latinoamericano and guitarist Manuel Barrueco, received a Latin GRAMMY Award in 2009 for Best Classical Contemporary Composition.

Riccardo’s non-classical recording credits include the rock group The Syndeys and The Glass Cube; hip-hop artists Freestyle, Unknown Prose, Lil ’Toine, E-Nyse, Charon Don and D. J. Huggy; and jazz artists Alton Merrell, Nathan Davis, Roger Humphries, Bobby Negri, Dave Pellow, James Johnson Jr, and others.

Riccardo has co-produced CDs with Carnegie Mellon students Steven Goldberg, Anna Vogelzang, Tate Olsen, Michael Kooman, Jeffrey Grossman, Ali Spagnola, Ariel Winters, Friedrich Myers, Justin Bishop, Greg Runco, Andy Jih, Haseeb Qureshi, Gabriel Cuthbert, Derek Pendergrass, Joshua Hailpern, Fumiya Yamamoto, Enoma Oviasu, John O’Hallaron, and others. He also oversees recordings with participants in the Arts Greenhouse project, a community-oriented hip-hop workshop for teenagers.

Riccardo also edits and masters the full season of Pittsburgh Symphony Orchestra performances in conjunction with WQED-FM for local and national radio broadcast, and is in his twenty-third year of recording and editing performances of the Pittsburgh Opera for radio broadcast.

With Carnegie Mellon alumnus Alex Geis, Riccardo has developed the Webcast project and the Destination website for the Carnegie Mellon School of Music, the first music conservatory in the world to offer live Internet broadcast of student recitals and ensemble concerts.

Riccardo has master's degrees in mathematics from Duquesne University and musicology from the University of Pittsburgh. He speaks Italian, and for several years was assistant accompanist for singers with the EPCASO program in Oderzo, Italy. He is former program annotator for the Y-Music Series, and former music critic for WQED-FM's Sunday Arts Magazine. 

Riccardo lives happily in Pittsburgh without a cellphone or a television, and has been a vegetarian for longer than anyone can remember.


Dr. Richard Stern

Professor of Electrical Engineering


Most current speech recognition systems do not yet perform well in difficult acoustical environments, or in environments different from those in which they were trained. This research is concerned with improving the robustness of SPHINX, Carnegie Mellon's large-vocabulary continuous-speech recognition system, with respect to acoustical distortion resulting from sources such as background noise, competing talkers, change of microphone, and room reverberation. Several different strategies are being used to address these problems, including improved noise cancellation and speech normalization methods, the use of representations of the speech waveform based on the processing of sounds by the human auditory system, and the use of array-processing techniques to improve the signal-to-noise ratio of the speech that is input to the system.
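
As an illustration of the last of these strategies, the sketch below shows the simplest form of array processing, a delay-and-sum beamformer: when each microphone's propagation delay to the talker is known, aligning and averaging the channels reinforces the speech while uncorrelated noise partially cancels. This is not drawn from SPHINX or Dr. Stern's code; the function name and the assumption of known integer sample delays are illustrative only.

    import numpy as np

    def delay_and_sum(channels, delays_samples):
        """Align multi-microphone signals toward the talker and average them.

        channels:       list of equal-length 1-D NumPy arrays, one per microphone
        delays_samples: integer delay (in samples) of the talker's signal at each
                        microphone, relative to the earliest-arriving channel
        """
        length = min(len(c) for c in channels)
        out = np.zeros(length)
        for signal, delay in zip(channels, delays_samples):
            # Shift each channel earlier by its delay so the copies of the speech line up.
            # (np.roll wraps at the edges; acceptable for a short illustrative sketch.)
            out += np.roll(signal[:length], -delay)
        # Averaging keeps the aligned speech at full level while uncorrelated
        # noise adds incoherently, improving the signal-to-noise ratio.
        return out / len(channels)
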
Signal Processing in the Auditory System

This research includes both psychoacoustical measurements to determine how we hear complex sounds, and the development of mathematical models that use optimal communication theory to relate the results of these experiments to the neural coding of sounds by the auditory system. Much of this work has been concerned with the localization of sound and other aspects of binaural perception.


Jesse Stiles

Assistant Teaching Professor of Sound Media


Jesse Stiles (b. 1978, Boston, MA) is an electronic composer, performer, installation artist, and software designer.  Stiles’ work has been featured at internationally recognized institutions including the Smithsonian American Art Museum, Lincoln Center, the Whitney Museum of American Art, and the Park Avenue Armory.  Stiles has appeared multiple times at Carnegie Hall, performing as a soloist with electronic instruments.  

In his music and artwork, Stiles creates immersive sonic and visual environments that encourage new methods of listening and looking.  His musical output ranges from highly experimental, using texture and spatialization to create abstract clouds of sound, to borderline danceable, exploring the sounds of electronic dance and rock music to create avant-garde performances and recordings.  Stiles’ installation artwork makes use of generative algorithms to control sound, video, light, and robotics, combining these media to create synaesthetic compositions that transform museums and galleries into evolving audiovisual environments.

Stiles has collaborated with many leading figures in experimental music including Pauline Oliveros, Meredith Monk, David Behrman, and Morton Subotnick.  He has been featured as a soloist with the San Francisco Symphony and the New World Symphony, performing with electronic instruments.  Stiles' recordings have been published by Conrex Records, Specific Recordings, Gagarin Records, and Araca Recs.  Stiles has worked as a sound designer and composer on a wide variety of award-winning films, museum exhibitions, and video games. 

Starting in 2010, Stiles served as the Music Supervisor for the Merce Cunningham Dance Company.  Working with the company during their precedent-setting "Legacy Tour," he produced and performed in more than 200 concerts featuring compositions by groundbreaking composers including John Cage, David Tudor, Brian Eno, Radiohead, Sigur Ros, and John Paul Jones.  Stiles' compositions were featured in many of the company's site-specific "Event" performances.  

Stiles is currently a Professor in the School of Music at Carnegie Mellon University, where he leads courses on emerging music technologies.



Dr. Thomas Sullivan

Associate Teaching Professor in Electrical and Computer Engineering


Research Interests

Though there is currently no funded research at CMU in this area, Dr. Sullivan's interests lie in signal processing for audio and music systems.

Audio Signal Processing

As the professional recording industry has grown, so have the complexity and quality of sound recording equipment. Research in audio signal processing serves the advancement of digital audio recording. From lossless data compression, to higher-quality filtering for A/D and D/A conversion, to better error-correction coding for digital hard disk and magnetic tape systems (and compact discs), the areas where electrical engineers can aid the entertainment industry are numerous.

Music Signal Processing

Signals from musical instruments are very complex waveforms. As the professional recording and performance industries demand higher-quality synthesis of existing musical instruments, the study of new methods of instrument synthesis takes on increasing importance. In addition, the improved production quality of film and television has increased the need for more realistic generation of sound effects. The use of digital sampling in the creation of music and sound effects merges the music and professional audio signal processing areas.
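
As one concrete, deliberately simple example of the kind of technique studied in this area, the sketch below implements the classic Karplus-Strong plucked-string algorithm: a short burst of noise circulates through a delay line with a gentle low-pass filter and quickly settles into a decaying, string-like tone. The function name, default sample rate, and decay value are illustrative assumptions, not taken from any course.

    import numpy as np

    def karplus_strong(frequency_hz, duration_s, sample_rate=44100, decay=0.996):
        """Karplus-Strong plucked-string synthesis (illustrative sketch)."""
        n_samples = int(duration_s * sample_rate)
        period = int(sample_rate / frequency_hz)        # delay-line length sets the pitch
        buffer = np.random.uniform(-1.0, 1.0, period)   # the "pluck": a burst of noise
        output = np.empty(n_samples)
        for i in range(n_samples):
            output[i] = buffer[i % period]
            # Average adjacent samples and damp slightly: a crude low-pass filter
            # that rounds the noise off into a decaying, string-like tone.
            buffer[i % period] = decay * 0.5 * (buffer[i % period] + buffer[(i + 1) % period])
        return output

    # For example, two seconds of a plucked A at 220 Hz:
    tone = karplus_strong(220.0, 2.0)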

In addition, there is increasing interest in controlling music synthesizers with other existing musical instruments and with new, non-standard "instruments" or "controllers". Tracking the pitch and expressive gestures of these instruments is vital for extracting performance information that can give the performer high-level control over a music synthesizer.
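
A minimal sketch of the pitch-tracking half of that problem, assuming a monophonic instrument signal captured as a NumPy array, might estimate the fundamental frequency of each analysis frame by autocorrelation; real controllers add onset detection, smoothing across frames, and tracking of expressive parameters such as amplitude and brightness. The function name, frame size, and thresholds here are illustrative assumptions only.

    import numpy as np

    def estimate_pitch(frame, sample_rate, fmin=50.0, fmax=1000.0):
        """Estimate the fundamental frequency (Hz) of one frame by autocorrelation.

        frame: a 1-D NumPy array of samples (e.g. 2048 samples) from a monophonic
        instrument. Returns None when no clear periodicity is found.
        """
        frame = frame - np.mean(frame)                    # remove any DC offset
        corr = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
        lag_min = int(sample_rate / fmax)                 # shortest plausible period
        lag_max = min(int(sample_rate / fmin), len(corr) - 1)
        if lag_max <= lag_min:
            return None
        # The lag of the strongest autocorrelation peak is the period estimate.
        lag = lag_min + int(np.argmax(corr[lag_min:lag_max]))
        if corr[0] <= 0 or corr[lag] < 0.3 * corr[0]:     # silence or weak peak
            return None
        return sample_rate / lag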