The Java 3D API Specification

APPENDIX C

View Model Details
C.1 An Overview of the Java 3D View Model

Both camera-based and Java 3D-based view models allow a programmer to specify the shape of a view frustum and, under program control, to place, move, and reorient that frustum within the virtual environment. However, how they do this varies enormously. Unlike the camera-based system, the Java 3D view model allows slaving the view frustum's position and orientation to that of a six-degrees-of-freedom tracking device. By slaving the frustum to the tracker, Java 3D can automatically modify the view frustum so that the generated images match the end-user's viewpoint exactly.
C.2 Physical Environments and Their Effects

Imagine an application where the end user sits on a magic carpet. The application flies the user through the virtual environment by controlling the carpet's location and orientation within the virtual world. At first glance, it might seem that the application also controls what the end user will see, and it does, but only superficially.
C.2.1 A Head-Mounted Example

Imagine that the end user sees the magic carpet and the virtual world with a head-mounted display and head tracker. As the application flies the carpet through the virtual world, the user may turn to look to the left, to the right, or even toward the rear of the carpet. Because the head tracker keeps the renderer informed of the user's gaze direction, the renderer might not need to draw the scene directly in front of the magic carpet. The view that the renderer draws on the head-mount's display must match what the end user would see if the experience had occurred in the real world.
C.2.2 A Room-Mounted Example

Imagine a slightly different scenario where the end user sits in a darkened room in front of a large projection screen. The application still controls the carpet's flight path; however, the position and orientation of the user's head barely influences the image drawn on the projection screen. If a user looks left or right, then he or she sees only the darkened room. The screen does not move. It's as if the screen represents the magic carpet's "front window" and the darkened room represents the "dark interior" of the carpet.
C.2.3 Impact of Head Position and Orientation on the Camera

In the head-mounted example, the user's head position and orientation significantly affects a camera model's camera position and orientation but has hardly any effect on the projection matrix. In the room-mounted example, the user's head position and orientation contributes little to a camera model's camera position and orientation; however, it does affect the projection matrix.
C.3 The Coordinate Systems

The basic view model consists of eight or nine coordinate systems, depending on whether the end-user environment consists of a room-mounted display or a head-mounted display. First, we define the coordinate systems used in a room-mounted display environment. Next, we define the added coordinate system introduced when using a head-mounted display system.
C.3.1 Room-Mounted Coordinate Systems

The room-mounted coordinate system is divided into the virtual coordinate system and the physical coordinate system. Figure C-1 shows these coordinate systems graphically. The coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Note that the coexistence coordinate system exists in both worlds.
C.3.1.1 The Virtual Coordinate Systems

The Virtual World Coordinate System
The virtual world coordinate system encapsulates the unified coordinate system for all scene graph objects in the virtual environment. For a given View, the virtual world coordinate system is defined by the Locale object that contains the ViewPlatform object attached to the View. It is a right-handed coordinate system with +x to the right, +y up, and +z toward the viewer.
The ViewPlatform Coordinate System
The ViewPlatform coordinate system is the local coordinate system of the ViewPlatform leaf node to which the View is attached.
Figure C-1 Display Rigidly Attached to the Tracker Base
The Coexistence Coordinate System
A primary implicit goal of any view model is to map a specified local portion of the physical world onto a specified portion of the virtual world. Once established, one can legitimately ask where the user's head or hand is located within the virtual world or where a virtual object is located in the local physical world. In this way the physical user can interact with objects inhabiting the virtual world, and vice versa. To establish this mapping, Java 3D defines a special coordinate system, called coexistence coordinates, that is defined to exist in both the physical world and the virtual world.
C.3.1.2 The Physical Coordinate Systems

The Head Coordinate System
The head coordinate system allows an application to import its user's head geometry. The coordinate system provides a simple consistent coordinate frame for specifying such factors as the location of the eyes and ears.
The Image Plate Coordinate System
The image plate coordinate system corresponds with the physical coordinate system of the image generator. The image plate is defined as having its origin at the lower left-hand corner of the display area and as lying in the display area's XY plane. Note that image plate is a different coordinate system than either left image plate or right image plate. These last two coordinate systems are defined in head-mounted environments only (see Section C.3.2, "Head-Mounted Coordinate Systems").
The Head Tracker Coordinate System
The head tracker coordinate system corresponds to the six-degrees-of-freedom tracker's sensor attached to the user's head. The head tracker's coordinate system describes the user's instantaneous head position.
The Tracker Base Coordinate System
The tracker base coordinate system corresponds to the emitter associated with absolute position/orientation trackers. For those trackers that generate relative position/orientation information, this coordinate system is that tracker's initial position and orientation. In general, this coordinate system is rigidly attached to the physical world.
C.3.2 Head-Mounted Coordinate Systems

Head-mounted coordinate systems divide into the same virtual and physical coordinate systems. Figure C-2 shows these coordinate systems graphically. As with the room-mounted coordinate systems, the coordinate systems within the grayed area exist in the virtual world; those outside exist in the physical world. Once again, the coexistence coordinate system exists in both worlds. The arrangement of the coordinate systems differs from that of a room-mounted display environment. The head-mounted version of Java 3D's coordinate system differs in another way as well: it includes two image plate coordinate systems, one for each of an end-user's eyes.
The Left Image Plate and Right Image Plate Coordinate Systems
The left image plate and right image plate coordinate systems correspond with the physical coordinate system of the image generator associated with the left and right eye, respectively. The image plate is defined as having its origin at the lower left-hand corner of the display area and lying in the display area's XY plane. Note that the left image plate's XY plane does not necessarily lie parallel to the right image plate's XY plane. Note that the left image plate and the right image plate are different coordinate systems than the room-mounted display environment's image plate coordinate system.
Figure C-2 Display Rigidly Attached to the Head Tracker (Sensor)
C.4 The ViewPlatform Object

The ViewPlatform object is a leaf object within the Java 3D scene graph. The ViewPlatform object is the only portion of Java 3D's viewing model that resides as a node within the scene graph. Changes to TransformGroup nodes in the scene graph hierarchy above a particular ViewPlatform object move the view's location and orientation within the virtual world (see Section 9.4, "ViewPlatform: A Place in the Virtual World"). The ViewPlatform object also contains a ViewAttachPolicy and an ActivationRadius (see Section 6.11, "ViewPlatform Node," for a complete description of the ViewPlatform API).
C.5 The View Object

The View object is the central Java 3D object for coordinating all aspects of a viewing situation. All parameters that determine the viewing transformation to be used in rendering on a collected set of canvases in Java 3D are directly contained either within the View object or within objects pointed to by a View object (or pointed to by these, and so on). Java 3D supports multiple simultaneously active View objects, each of which controls its own set of canvases.

public void setTrackingEnable(boolean flag)
public boolean getTrackingEnable()

These methods set and retrieve a flag specifying whether to enable the use of six-degrees-of-freedom tracking hardware.

public void getUserHeadToVworld(Transform3D t)

This method retrieves the user-head-to-vworld coordinate system transform. This Transform3D object takes points in the user's head coordinate system and transforms them into points in the virtual world coordinate system. This value is read-only. Java 3D continually generates it, but only if enabled by using the setUserHeadToVworldEnable method.

public void setUserHeadToVworldEnable(boolean flag)
public boolean getUserHeadToVworldEnable()

These methods set and retrieve a flag that specifies whether to repeatedly generate the user-head-to-vworld transform (initially false).

public String toString()

This method returns a string that contains the values of this View object.
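The user-head-to-vworld transform described above is an ordinary 4x4 coordinate-system transform. The sketch below mimics the point mapping it performs, using a plain 3x4 matrix in place of the Transform3D class; the translation values are invented example tracker output, not anything Java 3D produces.

```java
// A minimal sketch of what "takes points in the user's head coordinate
// system and transforms them into points in the virtual world coordinate
// system" means. The 3x4 matrix stands in for Transform3D, and the
// values are invented examples.
class HeadToVworldSketch {
    // Apply the upper 3x4 of a rigid-body transform to a 3D point.
    static double[] transformPoint(double[][] m, double[] p) {
        double[] out = new double[3];
        for (int i = 0; i < 3; i++) {
            out[i] = m[i][0] * p[0] + m[i][1] * p[1] + m[i][2] * p[2] + m[i][3];
        }
        return out;
    }

    public static void main(String[] args) {
        double[][] headToVworld = {      // pure translation: the head sits
            { 1, 0, 0, 1 },              // at (1, 2, 3) in the virtual world
            { 0, 1, 0, 2 },
            { 0, 0, 1, 3 }
        };
        double[] centerEyeInHead = { 0, 0, 0 };  // head-coordinate origin
        double[] eyeInVworld = transformPoint(headToVworld, centerEyeInHead);
        System.out.println(eyeInVworld[0] + ", " + eyeInVworld[1] + ", "
                + eyeInVworld[2]);       // prints 1.0, 2.0, 3.0
    }
}
```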
C.5.1 View Policy

The view policy informs Java 3D whether it should generate the view using the head-tracked system of transformations or the head-mounted system of transformations. These policies are attached to the Java 3D View object.

public void setViewPolicy(int policy)
public int getViewPolicy()

These two methods set and retrieve the current policy for view computation. The policy variable specifies how Java 3D uses its transforms in computing new viewpoints, as follows:
- SCREEN_VIEW: Specifies that Java 3D should compute new viewpoints using the sequence of transforms appropriate to nonattached, screen-based head-tracked display environments, such as fishtank VR, multiple-projection walls, and VR desks. This is the default setting.
- HMD_VIEW: Specifies that Java 3D should compute new viewpoints using the sequence of transforms appropriate to head-mounted display environments. This policy is not available in compatibility mode (see Section C.11, "Compatibility Mode").
C.5.2 Screen Scale Policy

The screen scale policy specifies where the screen scale comes from. The policy can be one of the following:

- SCALE_EXPLICIT: Specifies that the screen scale is taken from the user-provided screenScale attribute.
- SCALE_SCREEN_SIZE: Specifies that the screen scale is derived from the physical screen according to the following formula. This is the default policy.

screenScale = physicalScreenWidth / 2.0

public void setScreenScalePolicy(int policy)
public int getScreenScalePolicy()

These methods set and retrieve the current screen scale policy.

public void setScreenScale(double scale)
public double getScreenScale()

These methods set and retrieve the screen scale value. This value is used when the screen scale policy is SCALE_EXPLICIT.
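The two policies can be sketched directly from the formula above. The class, method, and constant values below are hypothetical stand-ins, not the actual View API:

```java
// Illustrative model of the two screen scale policies. The constant
// values are placeholders; the real View constants may differ.
class ScreenScaleDemo {
    static final int SCALE_EXPLICIT = 0;
    static final int SCALE_SCREEN_SIZE = 1;

    static double screenScale(int policy, double explicitScale,
                              double physicalScreenWidth) {
        if (policy == SCALE_EXPLICIT) {
            return explicitScale;             // user-provided screenScale
        }
        return physicalScreenWidth / 2.0;     // derived from the screen
    }

    public static void main(String[] args) {
        // A 0.5 m wide display area under the default policy:
        System.out.println(screenScale(SCALE_SCREEN_SIZE, 0.0, 0.5)); // 0.25
        System.out.println(screenScale(SCALE_EXPLICIT, 1.0, 0.5));    // 1.0
    }
}
```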
C.5.3 Window Eyepoint Policy

The window eyepoint policy comes into effect in a non-head-tracked environment. The policy tells Java 3D how to construct a new view frustum based on changes in the field of view and in the Canvas3D's location on the screen. The policy comes into effect only when the application changes a parameter that can change the placement of the eyepoint relative to the view frustum.

public static final int RELATIVE_TO_FIELD_OF_VIEW

This variable tells Java 3D that it should modify the eyepoint position so it is located at the appropriate place relative to the window to match the specified field of view. This implies that the view frustum will change whenever the application changes the field of view. In this mode, the eye position is read-only. This is the default setting.

public static final int RELATIVE_TO_SCREEN

This variable tells Java 3D to interpret the eye's position relative to the entire screen. No matter where an end user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to the screen. This implies that the view frustum changes shape whenever an end user moves the location of a window on the screen. In this mode, the field of view is read-only.

public static final int RELATIVE_TO_WINDOW

This variable specifies that Java 3D should interpret the eye's position information relative to the window (Canvas3D). No matter where an end user moves a window (a Canvas3D), Java 3D continues to interpret the eye's position relative to that window. This implies that the frustum remains the same no matter where the end user moves the window on the screen. In this mode, the field of view is read-only.

public static final int RELATIVE_TO_COEXISTENCE

This variable specifies that Java 3D should interpret the fixed eyepoint position in the view as relative to the origin of coexistence coordinates. This eyepoint is transformed from coexistence coordinates to image plate coordinates for each Canvas3D. As in RELATIVE_TO_SCREEN mode, this implies that the view frustum shape will change whenever a user moves the location of a window on the screen.

public int getWindowEyepointPolicy()
public void setWindowEyepointPolicy(int policy)

These methods set and retrieve how Java 3D handles the predefined eyepoint in a non-head-tracked application. The policy can be one of four values: RELATIVE_TO_FIELD_OF_VIEW, RELATIVE_TO_SCREEN, RELATIVE_TO_WINDOW, or RELATIVE_TO_COEXISTENCE. The default value is RELATIVE_TO_FIELD_OF_VIEW.
C.5.4 Monoscopic View Policy

This policy specifies how Java 3D generates a monoscopic view.

public static final int LEFT_EYE_VIEW
public static final int RIGHT_EYE_VIEW
public static final int CYCLOPEAN_EYE_VIEW

These constants specify the monoscopic view policy. The first constant specifies that the monoscopic view should be the view as seen from the left eye. The second constant specifies that the monoscopic view should be the view as seen from the right eye. The third constant specifies that the monoscopic view should be the view as seen from the "center eye," the fictional eye half-way between the left and right eyes. This is the default setting.

public void setMonoscopicViewPolicy(int policy)
public int getMonoscopicViewPolicy()

These methods are deprecated. Use the Canvas3D.setMonoscopicViewPolicy and Canvas3D.getMonoscopicViewPolicy methods instead.
C.5.5 Visibility Policy

This policy specifies how visible and invisible objects are drawn.

public static final int VISIBILITY_DRAW_VISIBLE
public static final int VISIBILITY_DRAW_INVISIBLE
public static final int VISIBILITY_DRAW_ALL

These constants set the visibility policy for this view. The first constant specifies that only visible objects are drawn (this is the default). The second constant specifies that only invisible objects are drawn. The third constant specifies that both visible and invisible objects are drawn.

public void setVisibilityPolicy(int policy)
public int getVisibilityPolicy()

These methods set and retrieve the visibility policy for this view. The policy can be one of VISIBILITY_DRAW_VISIBLE, VISIBILITY_DRAW_INVISIBLE, or VISIBILITY_DRAW_ALL. The default visibility policy is VISIBILITY_DRAW_VISIBLE.
C.5.6 Coexistence Centering Enable

public void setCoexistenceCenteringEnable(boolean flag)
public boolean getCoexistenceCenteringEnable()

These methods set and retrieve the coexistenceCentering enable flag. If the coexistenceCentering flag is true, the center of coexistence in image plate coordinates, as specified by the trackerBaseToImagePlate transform, is translated to the center of either the window or the screen in image plate coordinates, according to the value of windowMovementPolicy.

public void setLeftManualEyeInCoexistence(Point3d position)
public void setRightManualEyeInCoexistence(Point3d position)
public void getLeftManualEyeInCoexistence(Point3d position)
public void getRightManualEyeInCoexistence(Point3d position)

These methods set and retrieve the position of the manual left and right eyes in coexistence coordinates. These values determine eye placement when a head tracker is not in use and the application is directly controlling the eye position in coexistence coordinates. These values are ignored when in head-tracked mode or when the windowEyepointPolicy is not RELATIVE_TO_COEXISTENCE.
C.5.8 Sensors and Their Location in the Virtual World

public void getSensorToVworld(Sensor sensor, Transform3D t)
public void getSensorHotSpotInVworld(Sensor sensor, Point3d position)
public void getSensorHotSpotInVworld(Sensor sensor, Point3f position)

The first method takes the sensor's last reading and generates a sensor-to-vworld coordinate system transform. This Transform3D object takes points in that sensor's local coordinate system and transforms them into virtual world coordinates. The next two methods retrieve the specified sensor's last hotspot location in virtual world coordinates.
C.6 The Screen3D Object

A Screen3D object represents one independent display device. The most common environment for a Java 3D application is a desktop computer with or without a head tracker. Figure C-3 shows a scene graph fragment for a display environment designed for such an end-user environment. Figure C-4 shows a display environment that matches the scene graph fragment in Figure C-3.
Figure C-3 A Portion of a Scene Graph Containing a Single Screen3D Object
Figure C-4 A Single-Screen Display Environment
A multiple-projection wall display presents a more exotic environment. Such environments have multiple screens, typically three or more. Figure C-5 shows a scene graph fragment representing such a system, and Figure C-6 shows the corresponding display environment.
Figure C-5 A Portion of a Scene Graph Containing Three Screen3D Objects
Figure C-6 A Three-Screen Display Environment
C.6.1 Screen3D Calibration Parameters

The Screen3D object is the 3D version of AWT's screen object (see Section 9.8, "The Screen3D Object"). To use a Java 3D system, someone or some program must calibrate the Screen3D object with the coexistence volume. These methods allow that person or program to inform Java 3D of those calibration parameters.

Measured Parameters

These calibration parameters are set once, typically by a browser, calibration program, system administrator, or system calibrator, not by an applet.

public void setPhysicalScreenWidth(double width)
public void setPhysicalScreenHeight(double height)

These methods store the screen's (image plate's) physical width and height in meters. The system administrator or system calibrator must provide these values by measuring the display's active image width and height. In the case of a head-mounted display, this should be the display's apparent width and height at the focal plane.
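One reason Java 3D needs these measured values is that pixel counts alone carry no physical size. The hypothetical helper below (not a Screen3D method) shows how the calibrated width ties a pixel to a distance in meters:

```java
// A hypothetical calibration helper, not part of the Screen3D API:
// derive a display's physical pixel pitch from its measured width.
class ScreenCalibration {
    static double metersPerPixel(double physicalScreenWidth, int pixelWidth) {
        return physicalScreenWidth / pixelWidth;
    }

    public static void main(String[] args) {
        // A display whose active image is 0.4064 m wide at 1280 pixels:
        System.out.println(metersPerPixel(0.4064, 1280));  // ≈ 3.175e-4 m
    }
}
```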
C.6.2 Accessing and Changing Head Tracker Coordinates

public void setTrackerBaseToImagePlate(Transform3D t)
public void getTrackerBaseToImagePlate(Transform3D t)

These methods set and get the tracker-base-to-image-plate coordinate system transform. This transform is typically a calibration constant. It is used only in SCREEN_VIEW mode. Users must recalibrate whenever the image plate moves relative to the tracker.

public void setHeadTrackerToLeftImagePlate(Transform3D t)
public void getHeadTrackerToLeftImagePlate(Transform3D t)
public void setHeadTrackerToRightImagePlate(Transform3D t)
public void getHeadTrackerToRightImagePlate(Transform3D t)

These methods set and get the head-tracker-to-left-image-plate and head-tracker-to-right-image-plate coordinate system transforms, respectively. These transforms are typically calibration constants. They are used only in HMD_VIEW mode.
C.7 The Canvas3D Object

Java 3D provides special support for those applications that wish to manipulate an eye position even in a non-head-tracked display environment. One situation where such a facility proves useful is an application that wishes to generate a very high-resolution image composed of lower-resolution tiled images. The application must generate each tiled component of the final image from a common eye position with respect to the composite image but a different eye position from the perspective of each individual tiled element.

public boolean getSceneAntialiasingAvailable()

This method returns a status flag indicating whether scene antialiasing is available.
C.7.2 Accessing and Modifying an Eye's Image Plate Position

A Canvas3D object provides sophisticated applications with access to the eye's position information in head-tracked, room-mounted runtime environments. It also allows applications to manipulate the position of an eye relative to an image plate in non-head-tracked runtime environments.

public void setLeftManualEyeInImagePlate(Point3d position)
public void setRightManualEyeInImagePlate(Point3d position)
public void getLeftManualEyeInImagePlate(Point3d position)
public void getRightManualEyeInImagePlate(Point3d position)

These methods set and retrieve the position of the manual left and right eyes in image plate coordinates. These values determine eye placement when a head tracker is not in use and the application is directly controlling the eye position in image plate coordinates. In head-tracked mode or when the windowEyepointPolicy is RELATIVE_TO_FIELD_OF_VIEW or RELATIVE_TO_COEXISTENCE, this value is ignored. When the windowEyepointPolicy is RELATIVE_TO_WINDOW, only the Z value is used.

public void getLeftEyeInImagePlate(Point3d position)
public void getRightEyeInImagePlate(Point3d position)
public void getCenterEyeInImagePlate(Point3d position)

These methods retrieve the actual position of the left eye, right eye, and center eye in image plate coordinates and copy that value into the object provided. The center eye is the fictional eye half-way between the left and right eyes. These three values are a function of the windowEyepointPolicy, the tracking enable flag, and the manual left, right, and center eye positions.

public void getPixelLocationInImagePlate(int x, int y, Point3d imagePlatePoint)
public void getPixelLocationInImagePlate(Point2d pixelLocation, Point3d imagePlatePoint)

These methods compute the position of the specified AWT pixel value in image plate coordinates and copy that value into the object provided.

public void getPixelLocationFromImagePlate(Point3d imagePlatePoint, Point2d pixelLocation)

This method projects the specified point from image plate coordinates into AWT pixel coordinates. The AWT pixel coordinates are copied into the object provided.

public void getVworldToImagePlate(Transform3D t)

This method retrieves the current virtual-world-to-image-plate coordinate system transform and places it into the specified object.

public void getImagePlateToVworld(Transform3D t)

This method retrieves the current image-plate-to-virtual-world coordinate system transform and places it into the specified object.

public double getPhysicalWidth()
public double getPhysicalHeight()

These methods retrieve the physical width and height of this canvas window, in meters.

public void setMonoscopicViewPolicy(int policy)
public int getMonoscopicViewPolicy()

These methods set and retrieve the policy regarding how Java 3D generates a monoscopic view. If the policy is set to View.LEFT_EYE_VIEW, the view generated corresponds to the view as seen from the left eye. If set to View.RIGHT_EYE_VIEW, the view generated corresponds to the view as seen from the right eye. If set to View.CYCLOPEAN_EYE_VIEW, the view generated corresponds to the view as seen from the "center eye," the fictional eye half-way between the left and right eyes. The default monoscopic view policy is View.CYCLOPEAN_EYE_VIEW.
Note: For backward compatibility with Java 3D 1.1, if this attribute is set to its default value of View.CYCLOPEAN_EYE_VIEW, the monoscopic view policy in the View object will be used. An application should not use both the deprecated View method and this Canvas3D method at the same time.
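The pixel-to-image-plate mapping described above can be modeled with plain arithmetic. Image plate coordinates originate at the lower-left corner of the display area (Section C.3.1.2), while AWT pixel coordinates originate at the upper left, so the Y axis flips. The uniform-pixel-pitch assumption and all names here are a sketch, not the Canvas3D implementation:

```java
// An illustrative model of getPixelLocationInImagePlate, assuming a
// uniform pixel pitch derived from the calibrated physical size.
// Not the actual Canvas3D code.
class PixelToImagePlate {
    final double metersPerPixelX, metersPerPixelY;
    final int heightPixels;

    PixelToImagePlate(double physicalWidth, double physicalHeight,
                      int widthPixels, int heightPixels) {
        this.metersPerPixelX = physicalWidth / widthPixels;
        this.metersPerPixelY = physicalHeight / heightPixels;
        this.heightPixels = heightPixels;
    }

    // Returns {x, y} in image plate coordinates, in meters.
    double[] pixelToImagePlate(int pixelX, int pixelY) {
        double x = pixelX * metersPerPixelX;
        double y = (heightPixels - pixelY) * metersPerPixelY;  // flip Y
        return new double[] { x, y };
    }

    public static void main(String[] args) {
        // A 0.4 m x 0.3 m display area at 1000 x 750 pixels:
        PixelToImagePlate plate = new PixelToImagePlate(0.4, 0.3, 1000, 750);
        double[] p = plate.pixelToImagePlate(0, 750);  // bottom-left pixel
        System.out.println(p[0] + ", " + p[1]);        // prints 0.0, 0.0
    }
}
```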
C.8 The PhysicalBody Object

The PhysicalBody object contains information concerning the physical characteristics of the end-user's body. The head parameters allow end users to specify their own heads' characteristics and thus to customize any Java 3D application so that it conforms to their unique geometry. The PhysicalBody object defines head parameters in the head coordinate system. It provides a simple and consistent coordinate frame for specifying such factors as the location of the eyes and thus the interpupilary distance.
The Head Coordinate System

The head coordinate system has its origin on the head's bilateral plane of symmetry, roughly half-way between the left and right eyes. The origin of the head coordinate system is known as the center eye. The positive X-axis extends to the right. The positive Y-axis extends up. The positive Z-axis extends into the skull. Values are in meters.

public PhysicalBody()

Constructs a default user PhysicalBody object with default eye and ear positions.

public PhysicalBody(Point3d leftEyePosition, Point3d rightEyePosition)
public PhysicalBody(Point3d leftEyePosition, Point3d rightEyePosition, Point3d leftEarPosition, Point3d rightEarPosition)

These constructors construct a PhysicalBody object with the specified eye and ear positions.

public void getLeftEyePosition(Point3d position)
public void setLeftEyePosition(Point3d position)
public void getRightEyePosition(Point3d position)
public void setRightEyePosition(Point3d position)

These methods set and retrieve the position of the center of rotation of a user's left and right eyes in head coordinates.

public void getLeftEarPosition(Point3d position)
public void setLeftEarPosition(Point3d position)
public void getRightEarPosition(Point3d position)
public void setRightEarPosition(Point3d position)

These methods set and retrieve the position of the user's left and right ears in head coordinates.

public double getNominalEyeHeightFromGround()
public void setNominalEyeHeightFromGround(double height)

These methods set and retrieve the user's nominal eye height as measured from the ground to the center eye in the default posture. In a standard computer monitor environment, the default posture would be seated. In a multiple-projection display room environment or a head-tracked environment, the default posture would be standing.

public double getNominalEyeOffsetFromNominalScreen()
public void setNominalEyeOffsetFromNominalScreen(double offset)

These methods set and retrieve the offset from the center eye to the center of the display screen. This offset distance allows an "over the shoulder" view of the scene as seen by the end user.

public void setHeadToHeadTracker(Transform3D t)
public void getHeadToHeadTracker(Transform3D t)

These methods set and retrieve the head-to-head-tracker coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. This transform is used in both SCREEN_VIEW and HMD_VIEW modes.

public String toString()

This method returns a string that contains the values of this PhysicalBody object.
C.9 The PhysicalEnvironment Object

The PhysicalEnvironment object contains information about the end-user's local physical environment. This includes information about audio output devices and tracking sensor hardware, if present.

public PhysicalEnvironment()

Constructs and initializes a new PhysicalEnvironment object with default parameters.

public PhysicalEnvironment(int sensorCount)

Constructs and initializes a new PhysicalEnvironment object with the specified sensor count.

The sensor information provides real-time access to continuous-input devices such as joysticks and trackers. It also contains two-degrees-of-freedom joystick and six-degrees-of-freedom tracker information. See Section 11.2, "Sensors," for more information. Java 3D uses Java AWT's event model for noncontinuous input devices such as keyboards (see Chapter 11, "Input Devices and Picking").
Audio device information associated with the PhysicalEnvironment object provides a mechanism that allows the application to choose a particular audio device (if more than one is available) and explicitly set the type of audio playback for sound rendered using this device. See Chapter 12, "Audio Devices," for more details on the fields and methods that set and initialize the device driver and output playback associated with the audio device.
Methods

The PhysicalEnvironment object specifies the following methods pertaining to audio output devices and input sensors.

public void setAudioDevice(AudioDevice device)

This method selects the specified AudioDevice object as the device through which audio rendering for this PhysicalEnvironment will be performed.

public AudioDevice getAudioDevice()

This method retrieves the currently selected AudioDevice object.

public void addInputDevice(InputDevice device)
public void removeInputDevice(InputDevice device)

These methods add and remove an input device to or from the list of input devices.

public Enumeration getAllInputDevices()

This method creates an enumerator that produces all input devices.

public void setSensorCount(int count)
public int getSensorCount()

These methods set and retrieve the count of the number of sensors stored within the PhysicalEnvironment object. It defaults to a small number of sensors. It should be set to the number of sensors available in the end-user's environment before initializing the Java 3D API.

public void setCoexistenceToTrackerBase(Transform3D t)
public void getCoexistenceToTrackerBase(Transform3D t)

These methods set and get the coexistence-to-tracker-base coordinate system transform. If head tracking is enabled, this transform is a calibration constant. If head tracking is not enabled, this transform is not used. It is used in both SCREEN_VIEW and HMD_VIEW modes.

public boolean getTrackingAvailable()

This method returns a status flag indicating whether tracking is available.

public void setSensor(int index, Sensor sensor)
public Sensor getSensor(int index)

The first method sets the sensor specified by the index to the sensor provided. The second method retrieves the specified sensor.

public void setDominantHandIndex(int index)
public int getDominantHandIndex()

These methods set and retrieve the index of the dominant hand.

public void setNonDominantHandIndex(int index)
public int getNonDominantHandIndex()

These methods set and retrieve the index of the nondominant hand.

public void setHeadIndex(int index)
public int getHeadIndex()
public void setRightHandIndex(int index)
public int getRightHandIndex()
public void setLeftHandIndex(int index)
public int getLeftHandIndex()

These methods set and retrieve the index of the head, right hand, and left hand. The index parameter refers to the sensor index.

public int getCoexistenceCenterInPworldPolicy()
public void setCoexistenceCenterInPworldPolicy(int policy)

These methods set and retrieve the physical coexistence policy used in this physical environment. This policy specifies how Java 3D will place the user's eyepoint as a function of current head position during the calibration process. Java 3D permits one of three values: NOMINAL_HEAD, NOMINAL_FEET, or NOMINAL_SCREEN.
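The head and hand indices above are simply positions in the PhysicalEnvironment's sensor list. The minimal sketch below models that indexing with a String-based registry standing in for real Sensor objects; the class and names are invented for illustration:

```java
// A stand-in model of the sensor-index scheme: indices returned when a
// sensor is registered later select it from the list. Not Java 3D code.
import java.util.ArrayList;
import java.util.List;

class SensorRegistry {
    final List<String> sensors = new ArrayList<>();
    int headIndex = -1;        // unset until a head sensor is registered
    int rightHandIndex = -1;

    int addSensor(String description) {   // returns the new sensor's index
        sensors.add(description);
        return sensors.size() - 1;
    }

    String getSensor(int index) { return sensors.get(index); }
    int getSensorCount() { return sensors.size(); }

    public static void main(String[] args) {
        SensorRegistry env = new SensorRegistry();
        env.headIndex = env.addSensor("head tracker");
        env.rightHandIndex = env.addSensor("right-hand wand");
        System.out.println(env.getSensor(env.headIndex));  // head tracker
    }
}
```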
C.10 Viewing in Head-Tracked Environments

Section 9.5, "Generating a View," describes how Java 3D generates a view for a standard flat-screen display with no head tracking. In this section, we describe how Java 3D generates a view in a room-mounted, head-tracked display environment, either a computer monitor with shutter glasses and head tracking or a multiple-wall display with head-tracked shutter glasses. Finally, we describe how Java 3D generates view matrices in a head-mounted and head-tracked display environment.
C.10.1 A Room-Mounted Display with Head Tracking

When head tracking combines with a room-mounted display environment (for example, a standard flat-screen display), the ViewPlatform's origin and orientation serve as a base for constructing the view matrices. Additionally, Java 3D uses the end user's head position and orientation to compute where the end user's eyes are located in physical space. Each eye's position serves to offset the corresponding virtual eye's position relative to the ViewPlatform's origin. Each eye's position also serves to specify that eye's frustum, since the eye's position relative to a Screen3D uniquely specifies that eye's view frustum. Note that Java 3D will access the PhysicalBody object to obtain information describing the user's interpupillary distance and tracking hardware, values it needs to compute the end user's eye positions from the head position information.
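To make the eye-position computation concrete, here is a minimal plain-Java sketch (not the Java 3D API) of how left- and right-eye positions can be derived from a tracked head position, the head's "right" axis, and the interpupillary distance. The class and method names are hypothetical; Java 3D performs an equivalent computation internally using PhysicalBody and sensor data.

```java
// Illustrative sketch only, not the Java 3D implementation.
public class EyePositions {
    // head: tracked head position; rightAxis: unit vector along the head's +X
    // axis in tracker coordinates; ipd: interpupillary distance in meters;
    // sign: -1 for the left eye, +1 for the right eye.
    public static double[] eye(double[] head, double[] rightAxis,
                               double ipd, int sign) {
        double half = sign * ipd / 2.0;
        return new double[] {
            head[0] + half * rightAxis[0],
            head[1] + half * rightAxis[1],
            head[2] + half * rightAxis[2],
        };
    }

    public static void main(String[] args) {
        double[] head  = {0.0, 1.7, 0.5};  // hypothetical head position (meters)
        double[] right = {1.0, 0.0, 0.0};  // head's +X axis in tracker space
        double ipd = 0.064;                // a nominal interpupillary distance

        double[] l = eye(head, right, ipd, -1);
        double[] r = eye(head, right, ipd, +1);
        // The eyes straddle the head position along its right axis.
        System.out.println(l[0] + " " + r[0]);  // -0.032 0.032
    }
}
```

Each eye position obtained this way then determines both that eye's offset from the ViewPlatform origin and, via its position relative to the Screen3D, that eye's view frustum.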
C.10.2 A Head-Mounted Display with Head Tracking

In a head-mounted environment, the ViewPlatform's origin and orientation also serve as a base for constructing view matrices. And, as in the head-tracked, room-mounted environment, Java 3D uses the end user's head position and orientation to further modify the ViewPlatform's position and orientation. In a head-tracked, head-mounted display environment, an end user's eyes do not move relative to their respective display screens; rather, the display screens move relative to the virtual environment. A rotation of the head by an end user can radically affect the final view's orientation. In this situation, Java 3D combines the position and orientation from the ViewPlatform with the position and orientation from the head tracker to form the view matrix. The view frustum, however, does not change: since the user's eyes do not move relative to their respective display screens, Java 3D can compute the projection matrix once and cache the result.
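The per-frame composition described above can be sketched in plain Java (not the Java 3D API). The projection matrix is fixed and could be cached, while the view matrix is recomposed each frame from the platform transform and the latest head-tracker reading; the matrices here are hypothetical translations used only to show the composition.

```java
// Illustrative sketch: composing the ViewPlatform transform with a
// head-tracker transform, as happens once per frame in an HMD environment.
public class HmdView {
    // Multiply two 4x4 matrices stored in row-major order.
    public static double[][] mul(double[][] a, double[][] b) {
        double[][] c = new double[4][4];
        for (int i = 0; i < 4; i++)
            for (int j = 0; j < 4; j++)
                for (int k = 0; k < 4; k++)
                    c[i][j] += a[i][k] * b[k][j];
        return c;
    }

    public static double[][] translation(double x, double y, double z) {
        return new double[][] {
            {1, 0, 0, x}, {0, 1, 0, y}, {0, 0, 1, z}, {0, 0, 0, 1}
        };
    }

    public static void main(String[] args) {
        double[][] platform = translation(0, 0, -10);  // platform pose in the world
        double[][] head     = translation(0, 0.1, 0);  // tracker: head rose 10 cm
        double[][] combined = mul(platform, head);     // recomputed every frame
        // The head offset is carried into the combined transform.
        System.out.println(combined[1][3]);  // 0.1
    }
}
```

Because only the combined transform changes per frame, the (unchanging) projection matrix need not be rebuilt, which is exactly why caching it is safe.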
C.11 Compatibility Mode

A camera-based view model allows application programmers to think about the images displayed on the computer screen as if a virtual camera took those images. Such a view model allows application programmers to position and orient a virtual camera within a virtual scene, to manipulate some parameters of the virtual camera's lens (specify its field of view), and to specify the locations of the near and far clipping planes.

public void setCompatibilityModeEnable(boolean flag)
public boolean getCompatibilityModeEnable()

This flag turns compatibility mode on or off. Compatibility mode is disabled by default.
Note: Use of these view-compatibility functions will disable some of Java 3D's view model features and limit the portability of Java 3D programs. These methods are primarily intended to help jump-start porting of existing applications.
C.11.1 Overview of the Camera-Based View Model

The traditional camera-based view model, shown in Figure C-7, places a virtual camera inside a geometrically specified world. The camera "captures" the view from its current location, orientation, and perspective. The visualization system then draws that view on the user's display device. The application controls the view by moving the virtual camera to a new location, by changing its orientation, by changing its field of view, or by controlling some other camera parameter.
Figure C-7 The Camera-Based View Model
C.11.2 Using the Camera-Based View Model

The camera-based view model allows Java 3D to bridge the gap between existing 3D code and Java 3D's view model. By using the camera-based view model methods, a programmer retains the familiarity of the older view model but gains some of the flexibility afforded by Java 3D's new view model.
C.11.2.1 Creating a Viewing Matrix

The Transform3D object provides the following method to create a viewing matrix:

public void lookAt(Point3d eye, Point3d center, Vector3d up)

This is a utility method that specifies the position and orientation of a viewing transform. It works similarly to the equivalent function in OpenGL. The inverse of this transform can be used to control the ViewPlatform object within the scene graph. Alternatively, this transform can be passed directly to the View's VpcToEc transform via the compatibility-mode viewing functions (see Section C.11.2.3, "Setting the Viewing Transform").
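Since the spec says lookAt works like its OpenGL equivalent, the underlying math can be sketched in plain Java (this is the standard gluLookAt construction, not the Java 3D source): the viewing matrix moves the eye to the origin and points the gaze down the -Z axis.

```java
// Plain-Java sketch of gluLookAt-style viewing-matrix math; row-major 4x4,
// right-handed coordinates. Not the Java 3D implementation.
public class LookAtSketch {
    static double[] norm(double[] v) {
        double n = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
        return new double[] {v[0]/n, v[1]/n, v[2]/n};
    }
    static double[] cross(double[] a, double[] b) {
        return new double[] {a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]};
    }
    static double dot(double[] a, double[] b) {
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
    }

    public static double[][] lookAt(double[] eye, double[] center, double[] up) {
        double[] f = norm(new double[] {center[0]-eye[0],
                                        center[1]-eye[1],
                                        center[2]-eye[2]});   // forward
        double[] s = norm(cross(f, up));                      // side = f x up
        double[] u = cross(s, f);                             // recomputed up
        return new double[][] {
            { s[0],  s[1],  s[2], -dot(s, eye)},
            { u[0],  u[1],  u[2], -dot(u, eye)},
            {-f[0], -f[1], -f[2],  dot(f, eye)},
            { 0,     0,     0,     1}
        };
    }

    public static void main(String[] args) {
        // Eye at (0,0,5) looking at the origin: the origin lands on the
        // -Z axis at distance 5, and the eye itself maps to (0,0,0).
        double[][] m = lookAt(new double[]{0, 0, 5},
                              new double[]{0, 0, 0},
                              new double[]{0, 1, 0});
        System.out.println(m[2][3]);  // -5.0
    }
}
```

Inverting this matrix yields a transform suitable for positioning a ViewPlatform, as the paragraph above notes.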
C.11.2.2 Creating a Projection Matrix

The Transform3D object provides the following three methods for creating a projection matrix. All three map points from eye coordinates (EC) to clipping coordinates (CC). Eye coordinates are defined such that (0, 0, 0) is at the eye and the projection plane is at z = -1.

public void frustum(double left, double right, double bottom, double top, double near, double far)

The frustum method establishes a perspective projection with the eye at the apex of a symmetric view frustum. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system (as are all other coordinate systems in Java 3D). The arguments define the frustum and its associated perspective projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively. The -far parameter specifies the far clipping plane. See Figure C-8.

public void perspective(double fovx, double aspect, double zNear, double zFar)

The perspective method establishes a perspective projection with the eye at the apex of a symmetric view frustum, centered about the Z-axis, with a fixed field of view. The resulting perspective projection transform mimics a standard camera-based view model. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system. The arguments define the frustum and its associated perspective projection: zNear and zFar specify the near and far clipping planes; fovx specifies the field of view in the X dimension, in radians; and aspect specifies the aspect ratio of the window. See Figure C-9.
Figure C-8 A Perspective Viewing Frustum
Figure C-9 Perspective View Model Arguments
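The geometry these arguments describe can be sketched with the standard OpenGL-style frustum matrix (Java 3D's internal matrix conventions may differ; this plain-Java sketch only illustrates the relationship between frustum and perspective). Note how perspective reduces to a symmetric frustum whose near-plane extent is derived from fovx and aspect.

```java
// Hedged sketch: OpenGL-style projection matrices, not Java 3D internals.
public class ProjectionSketch {
    public static double[][] frustum(double l, double r, double b, double t,
                                     double n, double f) {
        return new double[][] {
            {2*n/(r-l), 0,         (r+l)/(r-l),  0},
            {0,         2*n/(t-b), (t+b)/(t-b),  0},
            {0,         0,        -(f+n)/(f-n), -2*f*n/(f-n)},
            {0,         0,        -1,            0}
        };
    }

    // fovx: horizontal field of view in radians; aspect = width / height.
    public static double[][] perspective(double fovx, double aspect,
                                         double zNear, double zFar) {
        double right = zNear * Math.tan(fovx / 2.0);  // near-plane half-width
        double top   = right / aspect;                // near-plane half-height
        return frustum(-right, right, -top, top, zNear, zFar);
    }

    public static void main(String[] args) {
        // A 90-degree horizontal field of view with a square aspect ratio:
        // the near-plane corners sit at (+/-near, +/-near, -near).
        double[][] p = perspective(Math.PI / 2.0, 1.0, 1.0, 100.0);
        System.out.println(p[0][0]);  // approximately 1.0
    }
}
```

The -1 in the last row is what produces the perspective divide; the orthographic ortho method below has no such term.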
public void ortho(double left, double right, double bottom, double top, double near, double far)

The ortho method establishes a parallel projection. The orthographic projection transform mimics a standard camera-based view model. The transform maps points from eye coordinates to clipping coordinates. The clipping coordinates generated by the resulting transform are in a right-handed coordinate system. The arguments define a rectangular box used for projection: (left, bottom, -near) and (right, top, -near) specify the points on the near clipping plane that map onto the lower-left and upper-right corners of the window, respectively. The -far parameter specifies the far clipping plane. See Figure C-10.
Figure C-10 Orthographic View Model
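As with the perspective case, the rectangular box that ortho's arguments describe can be sketched with the standard OpenGL-style orthographic matrix (a plain-Java illustration under that assumption, not Java 3D's internal matrix): the box is mapped to the canonical clipping cube with no perspective divide.

```java
// Hedged sketch: OpenGL-style parallel projection, not Java 3D internals.
public class OrthoSketch {
    public static double[][] ortho(double l, double r, double b, double t,
                                   double n, double f) {
        return new double[][] {
            {2/(r-l), 0,       0,        -(r+l)/(r-l)},
            {0,       2/(t-b), 0,        -(t+b)/(t-b)},
            {0,       0,      -2/(f-n),  -(f+n)/(f-n)},
            {0,       0,       0,         1}
        };
    }

    public static void main(String[] args) {
        // Map the box x, y in [-2, 2], z in [-1, -10] to the clipping cube.
        double[][] m = ortho(-2, 2, -2, 2, 1, 10);
        // A point on the right edge of the near plane, (2, 0, -1),
        // maps to x = 1 (the right clipping-cube face).
        double x = m[0][0] * 2 + m[0][3];
        System.out.println(x);  // 1.0
    }
}
```

The last row is (0, 0, 0, 1), so w stays 1 for every point: parallel projection scales and translates the box but performs no perspective division.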
C.11.2.3 Setting the Viewing Transform

The View object provides the following compatibility-mode methods that operate on the viewing transform:

public void setVpcToEc(Transform3D vpcToEc)
public void getVpcToEc(Transform3D vpcToEc)

These compatibility-mode methods set and retrieve the ViewPlatform coordinates (VPC) to eye coordinates viewing transform. If compatibility mode is disabled, this transform is derived from other values and is read-only.
C.11.2.4 Setting the Projection Transform

The View object provides the following compatibility-mode methods that operate on the projection transform:

public void setLeftProjection(Transform3D projection)
public void getLeftProjection(Transform3D projection)
public void setRightProjection(Transform3D projection)
public void getRightProjection(Transform3D projection)

These compatibility-mode methods specify a viewing frustum for the left and right eye that transforms points in eye coordinates to clipping coordinates. If compatibility mode is disabled, a RestrictedAccessException is thrown. In monoscopic mode, only the left-eye projection matrix is used.