Preparing Instruments


Before a task can do any work with instruments, it must first load an instrument template from disc to RAM. It must then create one or more instruments from the definition in the template, allocating necessary resources on the DSP and elsewhere so that the instruments can run.

Instrument Templates

An instrument template defines the qualities of an instrument: how many inputs, knobs, and outputs it has; what signals it generates; how it passes input signals to outputs; and so on. Instrument templates are currently designed to use the 3DO digital signal processor (DSP). Each instrument template is, in fact, a DSP program. Future templates may be designed to run on other audio devices.

Portfolio comes with a set of predefined instrument templates that are stored on the system disc in the audio directory. These default templates are described individually in Instrument Templates, in the 3DO Music and Audio Programmer's Reference. Because these instruments are all designed to run on the DSP, their names all end with the extension .dsp.

A task can create a surprisingly large variety of instrumental sounds using the default instrument templates. To do so, the task creates different instruments, connects them so that one instrument can process the output of another, and then sets knobs accordingly. If you want more variety within a single instrument, you can create your own instrument templates using the development tool ARIA. Custom instrument templates can be stored wherever you wish.

Types of Instruments

The types of instruments defined in the Portfolio default instrument set fall into several categories:

Sampled Sound Instruments. Most of the DSP instruments are sampled sound instruments, which use a variety of techniques to play back sampled sound tables stored in the AIFC or AIFF formats (formats that most commercial sound editing programs can create). Sampled sound instruments play back 8- and 16-bit sampled sounds that are either compressed (square/xact/delta or ADPCM) or uncompressed (literal). Table 1 shows the sampled sound instruments currently available in the Audio folio and lists their playback characteristics.

Table 1:  Sampled sound instruments. 
--------------------------------------------------------------------------------
Instrument Name       |Sample |Sample Storage Format   |Playback    |Stereo/Mono
                      |Size   |                        |Sample Rate |
--------------------------------------------------------------------------------
sampler.dsp           |16-bit |Literal                 |Variable    |Mono
--------------------------------------------------------------------------------
samplerenv.dsp        |16-bit |                        |Variable    |
--------------------------------------------------------------------------------
samplermod.dsp        |16-bit |                        |Variable    |
--------------------------------------------------------------------------------
varmono8.dsp          |8-bit  |Literal                 |Variable    |Mono
--------------------------------------------------------------------------------
varmono8_s.dsp        |8-bit  |                        |Variable    |Mono
--------------------------------------------------------------------------------
varmono16.dsp         |16-bit |Literal                 |Variable    |Mono
--------------------------------------------------------------------------------
fixedmonosample.dsp   |16-bit |Literal                 |44100 Hz    |Mono
--------------------------------------------------------------------------------
fixedmono8.dsp        |8-bit  |Literal                 |44100 Hz    |Mono
--------------------------------------------------------------------------------
fixedstereosample.dsp |16-bit |Literal                 |44100 Hz    |Stereo
--------------------------------------------------------------------------------
fixedstereo16swap.dsp |16-bit |Literal (little endian) |44100 Hz    |Stereo
--------------------------------------------------------------------------------
fixedstereo8.dsp      |8-bit  |                        |44100 Hz    |Stereo
--------------------------------------------------------------------------------
halfmonosample.dsp    |16-bit |Literal                 |22050 Hz    |Mono
--------------------------------------------------------------------------------
halfmono8.dsp         |8-bit  |Literal                 |22050 Hz    |Mono
--------------------------------------------------------------------------------
halfstereo8.dsp       |8-bit  |Literal                 |22050 Hz    |Stereo
--------------------------------------------------------------------------------
halfstereosample.dsp  |16-bit |                        |22050 Hz    |Stereo
--------------------------------------------------------------------------------
dcsqxdmono.dsp        |8-bit  |SQXD 2:1                |44100 Hz    |Mono
--------------------------------------------------------------------------------
dcsqxdstereo.dsp      |8-bit  |SQXD 2:1                |44100 Hz    |Stereo
--------------------------------------------------------------------------------
dcsqxdhalfmono.dsp    |8-bit  |SQXD 2:1                |22050 Hz    |Mono
--------------------------------------------------------------------------------
dcsqxdhalfstereo.dsp  |8-bit  |SQXD 2:1                |22050 Hz    |Stereo
--------------------------------------------------------------------------------
dcsqxdvarmono.dsp     |16-bit |SQXD                    |Variable    |Mono
--------------------------------------------------------------------------------
adpcmvarmono.dsp      |16-bit |ADPCM                   |Variable    |Mono
--------------------------------------------------------------------------------
adpcmmono.dsp         |4-bit  |ADPCM Intel/DVI 4:1     |44100 Hz    |Mono
--------------------------------------------------------------------------------
adpcmhalfmono.dsp     |4-bit  |ADPCM Intel/DVI 4:1     |22050 Hz    |Mono
--------------------------------------------------------------------------------

Although sample data used by the Audio folio is typically stored in the AIFC format (an unsupported variation of Apple Computer Inc.'s AIFF format), the data stored in an AIFC file can be compressed with several different compression formats, or it can be uncompressed, using literal sample values. The dcsqxd instruments are designed to play sample data compressed with square/xact/delta compression; the adpcm instruments are designed to play sample data compressed with ADPCM compression; the other instruments are designed to play literal data.

A sampled sound instrument's input sample size is the size of the sample it expects to read: 4 bits, 8 bits, or 16 bits. The instrument's output is always 16-bit samples, so an instrument that reads 8-bit original samples must convert them to 16-bit values. The instruments designed to read literal sample data simply append eight less-significant 0 bits to the 8-bit value (10010111 becomes 10010111 00000000, for example). The instruments designed to read sample data compressed in the square/xact/delta format use a decompression technique that produces 16-bit values with significant information in both the high- and low-order bytes.

Every sampled sound instrument has an output of 44,100 16-bit samples per second (44,100 Hz), a frequency designed for high-fidelity sound reproduction. Some sample data may have been originally recorded at 22,050 Hz, a frequency with less fidelity that requires only half the storage space for samples. If the instruments read those half-frequency tables at a 44,100 Hz playback rate, the sampled sound plays twice as fast and sounds an octave higher than it was recorded. To compensate, several sampled sound instruments have a playback sample rate of 22,050 Hz. When they read 22,050 Hz recorded sample data, they interpolate an intermediate value between each sample read to produce a 44,100 Hz final audio signal that does not change the original sample's pitch or duration.

The instruments sampler.dsp, varmono8.dsp, varmono16.dsp, dcsqxdvarmono.dsp, and adpcmvarmono.dsp have variable playback sample rates. By playing sample data at a rate higher or lower than its original recording rate, these instruments can raise the pitch of the sample higher or lower than it was originally recorded. Keep in mind that you can also use the other instruments to change original pitch if their fixed playback sample rate is different from the original recording's sample rate. For example, using a 44,100 Hz instrument to play back a voice recording made at 22,050 Hz produces the voices an octave higher, that speak twice as fast, a cheap way to produce chipmunk voices.

Portfolio sampled sound voices come in mono and stereo varieties. Mono instruments read sample data so that all samples go in succession to a single output channel. Stereo instruments read sample data so that odd samples go in succession to the left output and even samples go in succession to the right output.
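That interleaving convention can be sketched as a deinterleaving routine. Again a hypothetical illustration; the function and buffer names are made up:

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative sketch: split interleaved stereo sample data into the
   two output channels. Counting samples from 1, the odd-numbered
   samples feed the left output and the even-numbered samples feed
   the right output. */
static void deinterleave_stereo(const int16_t *in, size_t frames,
                                int16_t *left, int16_t *right)
{
    for (size_t i = 0; i < frames; i++) {
        left[i]  = in[2 * i];      /* samples 1, 3, 5, ... */
        right[i] = in[2 * i + 1];  /* samples 2, 4, 6, ... */
    }
}
```

A mono instrument, by contrast, would simply copy every sample in succession to its single output.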

The Music library (described in Music Library Calls, in the 3DO Music and Audio Programmer's Reference) includes a call named SelectSamplePlayer() that returns the name of an appropriate sampled sound instrument to play a given sample.

Sound Synthesis Instruments. Portfolio's sound synthesis instruments generate their own audio signals instead of reading them from sample data. Those instruments are:

Mixers. Mixers are instruments that accept audio signals from other instruments and, after mixing them, pass the final stereo audio signal on to the DAC, which converts it to a stereo analog signal. This is the signal that the user hears in headphones or on a stereo system.

If two or more mixer instruments operate, their outputs to the DAC are added together. If the sum of a sample frame goes over 0x7FFF, it is clipped to 0x7FFF, which can result in severe distortion, so it is important to keep the final output at an acceptable level. System amplitude allocation is a technique that helps a task avoid clipping distortion (see Allocating Amplitude).
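The summing and clipping behavior can be sketched like this. The positive limit of 0x7FFF comes from the text; clipping the negative side at -0x8000 is an assumption based on the 16-bit two's-complement range:

```c
#include <stdint.h>

/* Illustrative sketch: sum the outputs of several mixers for one
   sample and clip the result to the 16-bit range. Sums above 0x7FFF
   are clipped to 0x7FFF (per the text); the -0x8000 floor is an
   assumption from the two's-complement range. */
static int16_t mix_and_clip(const int16_t *mixer_outputs, int n)
{
    int32_t sum = 0;
    for (int i = 0; i < n; i++)
        sum += mixer_outputs[i];
    if (sum > 0x7FFF)  sum = 0x7FFF;   /* audible distortion here */
    if (sum < -0x8000) sum = -0x8000;
    return (int16_t)sum;
}
```

Once the summed level pins against the limit like this, the waveform's peaks are flattened, which is why keeping total amplitude in budget matters.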

These mixers accept audio signals and feed a final audio signal to the DAC:

The mixers all contain knobs that control how much gain each incoming signal has in the right channel and how much in the left channel.

Keep in mind that you must connect an instrument to a mixer before the instrument can be heard. Only mixers send their output to the DAC, where it is turned into an analog audio signal for reproduction.

Submixers. Submixers, unlike mixers, do not send their mixed stereo signal directly to the DAC. Instead, they provide a left output and a right output that can be sent to another instrument. Portfolio submixer instruments are:

Like mixers, submixers contain knobs that control how incoming signals are balanced in the output signals. Submixers are useful for combining outputs of other instruments and feeding the results into effects instruments.

Effects Instruments. Effects instruments typically accept an audio signal, alter it, and pass the altered signal out. In the case of delay-effects instruments, they accept an audio signal and pass it out through DMA to memory, where it can be altered by another instrument. Portfolio's effects instruments are:

Control Signal Instruments. Control signal instruments put out a control signal, typically a low-frequency signal too slow to be heard by itself but useful for controlling instrument knobs. Sound synthesis instruments can be used as control signal instruments if they are set to a very low frequency and then connected to the knob of another instrument. For example, a triangle wave generator can be set to a low frequency and then connected to the frequency knob of a filtered noise generator to produce a wind sound that rises and falls with the frequency of the triangle wave generator.

Portfolio provides the following important dedicated control-signal instruments:

envelope.dsp is used with instruments that do not contain their own envelope player. Once connected to another instrument's knob, envelope.dsp applies an envelope to the connected instrument by changing the knob values.

Other Instruments.

Instrument Specifications

When you decide to use an instrument, you can find its specifications in Instrument Templates, in the 3DO Music and Audio Programmer's Reference. The specifications give the function of each instrument and list the name of each knob, FIFO, input, and output available from the instrument. These names are strings that you use in audio calls to specify which knob, FIFO, input, or output you want to affect.

Instrument Resources. The specifications also list the resources required for each instrument: memory requirements, hardware requirements, and a value measured in DSP ticks. DSP ticks are DSP time units used during each frame of the DSP output. (A DSP frame is the time the DSP takes to put out one pair of samples to the DAC, which usually takes place 44,100 times per second.) The DSP can, at this writing, execute 565 ticks per 44,100 Hz frame.

The Audio folio allocates resources as instruments are created from templates and it totals the DSP ticks required for each instrument. If the total number of ticks goes above the possible frame total or the system runs out of other resources necessary for instrument allocation, the Audio folio refuses to allocate any more instruments. It is important to keep track of how many DSP ticks you are using with each instrument creation, because instruments are most likely to use up DSP ticks before using up other system resources.
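The folio's tick accounting can be mimicked on the application side as a simple budget check. A minimal sketch assuming the 565-tick frame budget quoted above; the per-instrument tick costs passed in would come from the instrument specifications:

```c
#include <stdbool.h>

#define DSP_TICKS_PER_FRAME 565   /* per the text, at this writing */

static int ticks_used = 0;        /* running total for this sketch */

/* Illustrative bookkeeping (not the folio's actual mechanism): track
   ticks consumed as instruments are created, and refuse a creation
   that would exceed the per-frame budget. */
static bool try_allocate_instrument(int instrument_ticks)
{
    if (ticks_used + instrument_ticks > DSP_TICKS_PER_FRAME)
        return false;             /* would overflow the frame; refuse */
    ticks_used += instrument_ticks;
    return true;
}
```

Keeping a running total like this on the application side makes it easy to predict when the folio itself will start refusing allocations.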

Loading an Instrument Template

Before you can create an instrument and use it to generate signals, you must first load the instrument's template. To do so, use this call:

Item LoadInsTemplate( char *Name, Item AudioDevice )
The call accepts two arguments: Name, a pointer to a string containing the filename of the file that contains the instrument template; and AudioDevice, the item number of the device on which you want the instrument to be played. Pass 0 for AudioDevice if you want to use the system audio device, currently the DSP. In this release, the DSP is the only device available for instrument playback.

When the call executes, it uses the specified instrument template file to create an instrument template item. The call returns the item number of the template if successful, or if unsuccessful, returns a negative number (an error code).

Defining an Instrument Template

If your task uses data streaming to import file images to RAM to avoid accessing CD-ROM later for those files, there can be instrument template file images in the data stream. If so, you can define an instrument template using an instrument template file image already loaded in RAM. To do so, use this call:

Item DefineInsTemplate( uint8 *Definition, int32 NumBytes, Item Device, char *Name )
The call accepts four arguments: *Definition, which is a pointer to the beginning of the instrument template file image; NumBytes, which is the size of the instrument template file image in bytes; Device, which is the item number of the device on which you want the instrument to be played; and *Name, a pointer to the name of the instrument template file image. At this writing, you should use 0 as the device number to specify the DSP. The DSP is the only audio device currently available.

When executed, DefineInsTemplate() uses the instrument template file image to create an instrument template item. If successful, it returns the item number of the instrument template. If unsuccessful, it returns a negative value (an error code).

Creating an Instrument

Once an instrument's template is loaded as an item, a task can use the template to create an instrument defined by the template. To do so, use this call:

Item CreateInstrument( Item InsTemplate, const TagArg *tagList )
The call accepts the item number of a previously loaded instrument template. It also accepts a list of tag arguments. When executed, CreateInstrument() creates an instrument item defined by the instrument template and allocates the DSP and system resources the instrument requires. The call returns the item number of the new instrument if successful; if unsuccessful, it returns a negative value (an error code).

Note that you can create as many instruments as you like from a single loaded instrument template. In fact, some tasks can be set up to create new instruments whenever the task does not have enough voices to play desired notes. For example, if a task needs to play a four-voice chord but has created only three instruments of the same kind to play that chord, it can create one more instrument to play the chord.

Loading an Instrument

To combine the process of loading an instrument template and the process of creating an instrument from that template in a single call, you can use this convenience call:

Item LoadInstrument( char *Name, Item AudioDevice, uint8 Priority )
The call accepts the pointer Name, which points to a string containing the filename of the file that contains the instrument template. It also accepts AudioDevice, the item number of the device on which the instrument is to be played. Both of these arguments are the same as those used in the call LoadInsTemplate(). The third argument, Priority, is a value from 0 to 200 that sets the priority of the allocated instrument; it is the same as the Priority argument to AllocInstrument().

When LoadInstrument() executes, it loads the specified instrument template, and creates an instrument from that template. It returns the item number of the instrument if successful, or a negative number (an error code) if unsuccessful.

The instrument template loaded with this call remains in memory, but your task has no item number with which to unload it. To unload it, you must use the UnloadInstrument() call (see Deleting an Instrument and Its Template). Before unloading an instrument, be sure to disconnect it with the DisconnectInstruments() call (see Disconnecting One Instrument From Another). See also "CreateInstrument" in Audio Folio Calls.