

CHAPTER 12

Audio Devices




A Java 3D application running on a particular machine could have one of several options available to it for playing the audio image created by the sound renderer. Perhaps the machine on which Java 3D is executing has more than one sound card (for example, one a wavetable synthesis card and the other a card with accelerated sound spatialization hardware). Furthermore, suppose there are Java 3D audio device drivers that execute Java 3D audio methods on each of these specific cards. The application would therefore have at least two audio device drivers through which the audio could be produced. In such a case the Java 3D application must choose the audio device driver with which sound rendering is to be performed. Once this audio device is chosen, the application can additionally select the type of audio playback device on which the rendered sound image is to be output. The playback device (headphones or one or more speakers) is physically connected to the port to which the selected device driver outputs.

12.1 AudioDevice Interface

The selection of a device driver is done through methods in the PhysicalEnvironment object (see Section C.9, "The PhysicalEnvironment Object"). The application queries how many audio devices are available. For each device, the user can get the AudioDevice object that describes it and query its characteristics. Once a decision is made about which of the available audio devices to use for a PhysicalEnvironment, the particular device is set into this PhysicalEnvironment's fields. Each PhysicalEnvironment object may use only a single audio device.
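For illustration, here is a minimal sketch of this selection step. It assumes chosenDevice is whichever of the available AudioDevice implementations the application decided to use; the class and method names here are illustrative only.

    import javax.media.j3d.AudioDevice;
    import javax.media.j3d.PhysicalEnvironment;

    public class AudioDeviceSelection {
        // Bind the chosen driver to a PhysicalEnvironment; each
        // PhysicalEnvironment may use only a single audio device.
        public static void bind(PhysicalEnvironment physEnv,
                                AudioDevice chosenDevice) {
            physEnv.setAudioDevice(chosenDevice);
            AudioDevice active = physEnv.getAudioDevice();
            System.out.println("Total channels: " + active.getTotalChannels());
        }
    }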

The AudioDevice interface specifies an abstract audio device that creators of Java 3D class libraries would implement for a particular device. Java 3D uses several methods to interact with specific devices. Since all audio devices implement this consistent interface, the user has a portable means of initializing an audio device, setting particular audio device elements, and querying generic characteristics for any audio device.

Constants
public static final int HEADPHONES
Specifies that audio playback will be through stereo headphones.

public static final int MONO_SPEAKER
Specifies that audio playback will be through a single speaker some distance away from the listener.

public static final int STEREO_SPEAKERS
Specifies that audio playback will be through stereo speakers some distance away from, and at some angle to, the listener.

12.1.1 Initialization

Each audio device driver must be initialized. The chosen device driver should be initialized before any Java 3D Sound methods are executed because the implementation of the Sound methods, in general, is potentially device-driver dependent.

Methods
public abstract boolean initialize()
Initializes the audio device. Exactly what occurs during initialization is implementation dependent. This method provides explicit control by the user over when this initialization occurs.

public abstract boolean close()
Closes the audio device, releasing resources associated with this device.
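As a sketch of the expected lifecycle (assuming device is an already constructed AudioDevice implementation; the wrapper class and method names are illustrative):

    import javax.media.j3d.AudioDevice;

    public class DeviceLifecycle {
        // Initialize the chosen device before any Sound methods run, and
        // close it when audio rendering is finished.
        static void runWithDevice(AudioDevice device) {
            if (!device.initialize()) {
                throw new IllegalStateException("audio device failed to initialize");
            }
            // ... create, play, and stop Sound nodes here ...
            device.close();   // release resources held by the device
        }
    }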

12.1.2 Audio Playback

Methods to set and retrieve the audio playback parameters are part of the AudioDevice object. The audio playback information specifies that playback will be through one of the following:

- Headphones
- A single monaural speaker
- A pair of stereo speakers, equally distant from the listener

The type of playback chosen affects the sound image generated. Cross-talk cancellation is applied to the audio image if playback over stereo speakers is selected.

Methods
The following methods affect the playback of sound processed by the Java 3D sound renderer:

public abstract void setAudioPlaybackType(int type)
public abstract int getAudioPlaybackType()
These methods set and retrieve the type of audio playback device (HEADPHONES, MONO_SPEAKER, or STEREO_SPEAKERS) used to output the analog audio from rendering Java 3D Sound nodes.

public abstract void setCenterEarToSpeaker(float distance)
public abstract float getCenterEarToSpeaker()
These methods set and retrieve the distance in meters from the center ear (the midpoint between the left and right ears) to one of the speakers in the listener's environment. For monaural speaker playback, a typical distance from the listener to the speaker in a workstation cabinet is 0.76 meters. For stereo speakers placed at the sides of the display, this might be 0.82 meters.

public abstract void setAngleOffsetToSpeaker(float angle)
public abstract float getAngleOffsetToSpeaker()
These methods set and retrieve the angle, in radians, between the vectors from the center ear to each of the speaker transducers and the vectors from the center ear parallel to the head coordinate's z-axis. Speakers placed at the sides of the computer display typically range between 0.175 and 0.350 radians (between 10 and 20 degrees).
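A short sketch combining these settings for stereo speakers at the sides of a desktop display, using the representative values quoted above (0.82 meters, and roughly 15 degrees, or 0.26 radians; the wrapper class is illustrative):

    import javax.media.j3d.AudioDevice;

    public class SpeakerSetup {
        // Configure playback for stereo speakers beside the display.
        static void configureDesktopSpeakers(AudioDevice device) {
            device.setAudioPlaybackType(AudioDevice.STEREO_SPEAKERS);
            device.setCenterEarToSpeaker(0.82f);   // meters, center ear to each speaker
            device.setAngleOffsetToSpeaker(0.26f); // radians off the head's Z axis
        }
    }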

public abstract PhysicalEnvironment getPhysicalEnvironment()
This method returns a reference to the AudioDevice's PhysicalEnvironment object.

12.1.3 Device-Driver-Specific Data

While the sound image created for final output to the playback system is either monaural or stereo (for this version of Java 3D), most device-driver implementations will mix the left and right image signals generated for each rendered sound source before outputting the final playback image. Each sound source will use some number, N, of input channels of this internal mixer.

Each implemented Java 3D audio device driver will have its own limitations and driver-specific characteristics. These include channel availability and usage (during rendering). Methods for querying these device-driver-specific characteristics follow.

Methods
public abstract int getTotalChannels()
This method retrieves the maximum number of channels available for Java 3D sound rendering for all sound sources.

public abstract int getChannelsAvailable()
During rendering, when Sound nodes are playing, this method returns the number of channels still available to Java 3D for rendering additional Sound nodes.

public abstract int getChannelsUsedForSound(Sound node)
This method queries the number of channels that are used, or would be used, to render a particular Sound node. The return value is the same whether the Sound is currently active and enabled (being played) or is inactive.
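For example, an application might check that enough channels remain before enabling a sound. A sketch, assuming soundNode is an existing Sound node (Sound.setEnable is part of the core Sound node API; the wrapper class is illustrative):

    import javax.media.j3d.AudioDevice;
    import javax.media.j3d.Sound;

    public class ChannelCheck {
        // Enable a sound only if the device has enough free channels for it.
        static boolean tryEnable(AudioDevice device, Sound soundNode) {
            int needed = device.getChannelsUsedForSound(soundNode);
            if (needed <= device.getChannelsAvailable()) {
                soundNode.setEnable(true);
                return true;
            }
            return false;
        }
    }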

12.2 AudioDevice3D Interface

The AudioDevice3D interface extends the AudioDevice interface. The intent is for this interface to be implemented by AudioDevice driver developers (whether a Java 3D licensee or not). Each implementation will use a sound engine of its choice.

The methods in this interface should not be called by an application. They are referenced by the core Java 3D Sound classes to render live, scheduled sound on the AudioDevice chosen by the application or user.

Methods in this interface provide the Java 3D core a generic way to set and query the audio device on which the application has chosen to perform audio rendering. Methods in this interface include operations for preparing and clearing sound samples; starting, stopping, pausing, and muting samples; and setting positional, attenuation, and aural attribute parameters.

Constants
public static final int BACKGROUND_SOUND
public static final int POINT_SOUND
public static final int CONE_SOUND
These constants specify the sound types. Sound types match the Sound node classes defined in the Java 3D core: BackgroundSound, PointSound, and ConeSound. The type of sound a sample is loaded as determines which methods affect it.

public static final int STREAMING_AUDIO_DATA
public static final int BUFFERED_AUDIO_DATA
These constants specify the sound data types. Samples can be processed as streaming or buffered data. Fully spatializing sound sources may require data to be buffered.

Sound data specified as streaming is not copied by the AudioDevice driver implementation. It is up to the application to ensure that this data is continuously accessible during sound rendering. Furthermore, full sound spatialization may not be possible on unbuffered sound data for all AudioDevice3D implementations. Sound data specified as buffered is copied by the AudioDevice driver implementation.

Methods
public abstract void setView(View reference)
This method accepts a reference to the current View object. The PhysicalEnvironment parameters (with playback type and speaker placement) and the PhysicalBody parameters (position and orientation of ears) can be obtained from this object, as can the transformations to and from ViewPlatform coordinates (the space the listener's head is in) and virtual world coordinates (the space the sounds are in).

public abstract int prepareSound(int soundType,
       MediaContainer  soundData)
Prepares the sound. This method accepts a reference to the MediaContainer that contains a reference to sound data and information about the type of data it is. The soundType parameter defines the type of sound associated with this sample (Background, Point, or Cone).

Depending on the type of MediaContainer holding the sound data and on the implementation of the AudioDevice used, sound data preparation could consist of opening, attaching, or loading the sound data into the device. Unless the MediaContainer's cache flag is true, this sound data should not, if possible, be copied into host or device memory.

Once this preparation is complete for the sound sample, an AudioDevice-specific index, used to reference the sample in future method calls, is returned. Most of the methods that follow require this index as a parameter.
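To make the index-based protocol concrete, here is a hedged sketch of the call sequence the Java 3D sound scheduler (not an application) might issue against a driver for a buffered point sound; the wrapper class and method names other than those of AudioDevice3D are illustrative:

    import javax.media.j3d.AudioDevice3D;
    import javax.media.j3d.MediaContainer;

    public class DriverProtocol {
        // Prepare a point sound, play it once, then free its resources.
        static void playOnce(AudioDevice3D driver, MediaContainer soundData) {
            int index = driver.prepareSound(AudioDevice3D.POINT_SOUND, soundData);
            driver.setSampleGain(index, 1.0f);  // combined initial gain
            driver.setLoop(index, 0);           // play through once, no looping
            driver.startSample(index);
            // ... later, when the scheduler decides the sound is done ...
            driver.stopSample(index);
            driver.clearSound(index);           // release the sample's resources
        }
    }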

public abstract void clearSound(int index)
Clears the sound. This method requests that the AudioDevice free all resources associated with the sample having this index.

public abstract long getSampleDuration(int index)
Queries sample duration. If it can be determined, this method returns the duration in milliseconds of the sound sample. For noncached streams, this method returns Sound.DURATION_UNKNOWN.

public abstract int getNumberOfChannelsUsed(int index)
public abstract int getNumberOfChannelsUsed(int index,
       boolean  muted)
Query the number of channels used by Sound. These methods return the number of channels (on the executing audio device) that this sound is using if it is already playing or those it is expected to use if it were to begin playing. The first method takes the sound's current state (including whether it is muted or unmuted) into account. The second method uses the muted parameter to make the determination.

For some AudioDevice3D implementations, the number of channels used by a sample may depend on whether the sample is muted.

public abstract int startSample(int index)
Starts sample. This method begins a sound playing on the AudioDevice and returns a flag indicating whether the sample was started.

public abstract int stopSample(int index)
Stops sample. This method stops the sound on the AudioDevice and returns a flag indicating whether the sample was stopped.

public abstract long getStartTime(int index)
Queries the last start time for this sound on the device. This method returns the system time of when the sound was last "started." Note that this start time will be as accurate as the AudioDevice implementation can make it, but that it is not guaranteed to be exact.

public abstract void setSampleGain(int index, float scaleFactor)
Sets gain scale factor. This method sets the overall gain scale factor applied to data associated with this source to increase or decrease its overall amplitude. The gain scaleFactor value passed into this method is the combined value of the Sound node's initial gain and the current AuralAttribute gain scale factors.

public abstract void setDistanceGain(int index,
       double[]  frontDistance, float[]  frontAttenuationScaleFactor,
       double[]  backDistance, float[] backAttenuationScaleFactor)
Sets distance gain. This method sets this sound's distance gain elliptical attenuation (not including the filter cutoff frequency) by defining corresponding arrays containing distances from the sound's origin and gain scale factors applied to all active positional sounds. The gain scale factor is applied to sound based on the distance the listener is from the sound source. These attenuation parameters are ignored for BackgroundSound nodes. The backAttenuationScaleFactor parameter is ignored for PointSound nodes.

For a full description of the attenuation parameters, see Section 6.9.3, "ConeSound Node."

public abstract void setDistanceFilter(int filterType,
       double[]  distance, float[]  filterCutoff)
Sets AuralAttributes distance filter. This method sets the distance filter, defined by corresponding arrays containing distances and frequency cutoff values, applied to all active positional sounds. The filter is applied to a sound based on the distance the listener is from the sound source. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setLoop(int index, int count)
Sets loop count. This method sets the number of times sound is looped during play. For a complete description of this method, see the description for the Sound.setLoop method in Section 6.9, "Sound Node."

public abstract void muteSample(int index)
public abstract void unmuteSample(int index)
These methods mute and unmute a playing sound sample. The first method makes a sample play silently. The second method makes a silently playing sample audible. Ideally, the muting of a sample is implemented by stopping the sample and freeing channel resources (rather than just setting its gain to zero). Ideally, unmuting a sample restarts it at an offset from the beginning equal to the number of milliseconds that have elapsed since the sample began playing.
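A driver implementing this ideal unmute behavior could compute the restart offset from the sample's recorded start time, as in this fragment of a hypothetical driver (resumeSampleAt is an illustrative internal helper, not part of the API):

    // Inside a hypothetical AudioDevice3D implementation.
    public void unmuteSample(int index) {
        // Offset the restart point by the time the sample has nominally
        // been playing while muted.
        long elapsedMillis = System.currentTimeMillis() - getStartTime(index);
        resumeSampleAt(index, elapsedMillis);  // illustrative internal helper
    }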

public abstract void pauseSample(int index)
public abstract void unpauseSample(int index)
These methods pause and unpause a playing sound sample. The first method temporarily stops a cached sample from playing without resetting the sample's current pointer back to the beginning of the sound data, so that the sample can later be unpaused from the location at which the pause was initiated. The second method restarts the paused sample from that location.

public abstract void setPosition(int index, Point3d position)
Sets position. This method sets this sound's location (in Local coordinates) from the provided position.

public abstract void setDirection(int index, Vector3d direction)
Sets direction. This method sets this sound's direction from the local coordinate vector provided. For a full description of the direction parameter, see Section 6.9.3, "ConeSound Node."

public abstract void setVworldXfrm(int index, Transform3D trans)
Sets virtual world transform. This method passes a reference to the concatenated transformation to be applied to local sound position and direction parameters.

public abstract void setRolloff(float rolloff)
Sets AuralAttributes gain rolloff. This method sets the speed-of-sound factor. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setAngularAttenuation(int index,
       int  filterType, double[] angle,
       float[]  attenuationScaleFactor, float[]  filterCutoff)
Sets angular attenuation. This method sets this sound's angular gain attenuation (including filter) by defining corresponding arrays containing angular offsets from the sound's axis, gain scale factors, and frequency cutoff applied to all active directional sounds. Gain scale factor is applied to sound based on the angle between the sound's axis and the ray from the sound source origin to the listener. The form of the attenuation parameter is fully described in Section 6.9.3, "ConeSound Node."

public abstract void setReflectionCoefficient(float coefficient)
Sets AuralAttributes reflection coefficient. This method sets the reflective or absorptive characteristics of the surfaces in the region defined by the current Soundscape region. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setReverbDelay(float reverbDelay)
Sets AuralAttributes reverberation delay. This method sets the delay time between each order of reflection (while reverberation is being rendered) explicitly given in milliseconds. A value for delay time of 0.0 disables reverberation. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setReverbOrder(int reverbOrder)
Sets AuralAttributes reverberation order. This method sets the number of times reflections are added to reverberation being calculated. A value of -1 specifies an unbounded number of reverberations. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setFrequencyScaleFactor(float
       frequencyScaleFactor)
Sets AuralAttributes frequency scale factor. This method specifies a scale factor applied to the frequency (or wavelength). This parameter can also be used to expand or contract the usual frequency shift applied to the sound source due to Doppler effect calculations. Valid values are greater than or equal to 0.0. A value greater than 1.0 will increase the playback rate. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void setVelocityScaleFactor(float
       velocityScaleFactor)
Sets AuralAttributes velocity scale factor. This method specifies a velocity scale factor applied to the velocity of sound relative to the listener's position and movement in relation to the sound's position and movement. This scale factor is multiplied by the calculated velocity portion of Doppler effect equation used during sound rendering. For a full description of this parameter and how it is used, see Section 8.1.17, "AuralAttributes Object."

public abstract void updateSample(int index)
Explicitly updates a sample. This method is called when a Sound is to be explicitly updated. It is called only when all of a sound's parameters are known to have been passed to the audio device. In this way, an implementation can choose to perform lazy evaluation of a sample, rather than updating the rendering state of the sample after every individual parameter change. This method can be left as a null method if the implementor so chooses.
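One common way to realize this lazy evaluation is a per-sample dirty flag, as in this fragment of a hypothetical driver (SampleState and pushParametersToDevice are illustrative, not part of the API):

    // Inside a hypothetical AudioDevice3D implementation.
    private SampleState[] samples;  // illustrative per-sample parameter cache

    public void setSampleGain(int index, float scaleFactor) {
        samples[index].gain = scaleFactor;  // record only; no device write yet
        samples[index].dirty = true;
    }

    public void updateSample(int index) {
        if (samples[index].dirty) {
            pushParametersToDevice(index);  // one batched device write (illustrative)
            samples[index].dirty = false;
        }
    }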

12.3 Instantiating and Registering a New Device

A browser or application developer must instantiate whatever system-specific audio devices he or she needs that exist on the system. This device information typically exists in a site configuration file. The browser or application will instantiate the physical environment as requested by the end user.

The API for instantiating devices is site-specific, but it consists of a device object with a constructor and at least all of the methods specified in the AudioDevice interface.

Once instantiated, the browser or application must register the device with the Java 3D sound scheduler by associating this device with a PhysicalEnvironment object. The setAudioDevice method introduces new devices to the Java 3D environment, and the allAudioDevices method produces an enumeration that allows examination of all available devices within a Java 3D environment. See Section C.9, "The PhysicalEnvironment Object," for more details.
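As a concrete sketch of this registration step, assuming Sun's utility audio engine com.sun.j3d.audioengines.javasound.JavaSoundMixer is present on the system (the wrapper class is illustrative):

    import javax.media.j3d.PhysicalEnvironment;
    import com.sun.j3d.audioengines.javasound.JavaSoundMixer;

    public class DeviceRegistration {
        // Instantiate a concrete device, register it with the
        // PhysicalEnvironment, and initialize it for rendering.
        static PhysicalEnvironment registerJavaSoundMixer() {
            PhysicalEnvironment physEnv = new PhysicalEnvironment();
            JavaSoundMixer mixer = new JavaSoundMixer(physEnv);
            physEnv.setAudioDevice(mixer);  // register with the sound scheduler
            mixer.initialize();             // ready the device before sounds play
            return physEnv;
        }
    }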





Copyright © 2000, Sun Microsystems, Inc. All rights reserved.