Chapter 4. Cameras and Lights

Chapter Objectives

After reading this chapter, you'll be able to do the following:

  • Add different types of cameras to a scene and view the scene from different vantage points

  • Add different types of lights to a scene and control their placement and effects

Chapters 4 through 8 focus on several different classes of nodes. Cameras and lights are discussed first because the objects you create are not visible without them. Then, in the following chapters, you learn more about other kinds of nodes in the scene database, including shapes, properties, bindings, text, textures, and NURBS curves and surfaces. Feel free to read selectively in this group of chapters, according to your interests and requirements.

Using Lights and Cameras

The previous chapters introduced you to group, property, and shape nodes and showed you how to create a scene graph using these nodes. Now you'll move on to two classes of nodes that affect how the 3D scene appears: lights and cameras. In Inventor, as in the real world, lights provide illumination so that you can view objects. If a scene graph does not contain any lights and you're using the default lighting model (Phong lighting), the objects are in darkness and cannot be seen. Just as the real world provides a variety of illumination types—light bulbs, the sun, theatrical spotlights—Inventor provides different classes of lights for you to use in your scene.

Cameras are our “eyes” for viewing the scene. Inventor provides a class of camera with a lens that functions just as the lens of a human eye does, and it also provides additional cameras that create a 2D “snapshot” of the scene with other kinds of lenses. This chapter discusses cameras first and assumes that the scene has at least one light at the top of the scene graph.

Tip: Viewer components create their own camera and light automatically. See Chapter 16 for more information on viewers.


Cameras

A camera node generates a picture of everything after it in the scene graph. Typically, you put the camera near the top of the scene graph, since it must precede the objects you want to view. A scene graph should contain only one active camera, and the camera's position in space is affected by the current geometric transformation.

Tip: A switch node can be used to make one of several cameras active.
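A minimal sketch of this technique in the Inventor ASCII file format (a hypothetical scene fragment; the file-format node names correspond to the SoSwitch, SoPerspectiveCamera, and SoOrthographicCamera C++ classes):

```
#Inventor V2.1 ascii

Switch {
   whichChild 0        # index of the active camera; -1 traverses neither

   PerspectiveCamera {
      position 0 0 5
   }
   OrthographicCamera {
      height 5
   }
}
DirectionalLight { }
Cube { }
```

Setting whichChild to 1 would activate the orthographic camera instead.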


Camera nodes are derived from the abstract base class SoCamera (see Figure 4-1).

Figure 4-1. Camera-Node Classes

SoCamera has the following fields:

viewportMapping (SoSFEnum) 

treatment when the camera's aspect ratio is different from the viewport's aspect ratio. (See “Mapping the Camera Aspect Ratio to the Viewport”.)

position (SoSFVec3f) 

location of the camera viewpoint. This location is modified by the current geometric transformation.

orientation (SoSFRotation)  

orientation of the camera's viewing direction. This field describes how the camera is rotated with respect to the default. The default camera looks from (0.0, 0.0, 1.0) toward the origin, and the up direction is (0.0, 1.0, 0.0). This field, along with the current geometric transformation, specifies the orientation of the camera in world space.

aspectRatio (SoSFFloat) 

ratio of the camera viewing width to height. The value must be greater than 0.0. A few of the predefined camera aspect ratios included in SoCamera.h are SO_ASPECT_SQUARE, SO_ASPECT_VIDEO, and SO_ASPECT_HDTV.


nearDistance (SoSFFloat) 

distance from the camera viewpoint to the near clipping plane.

farDistance (SoSFFloat) 

distance from the camera viewpoint to the far clipping plane.

focalDistance (SoSFFloat) 

distance from the camera viewpoint to the point of focus (used by the examiner viewer).

Figure 4-2 and Figure 4-3, later in this chapter, show the relationship between the camera position, orientation, near and far clipping planes, and aspect ratio.

When a camera node is encountered during rendering traversal, Inventor performs the following steps:

  1. During a rendering action, the camera is positioned in the scene (based on its specified position and orientation, which are modified by the current transformation).

  2. The camera creates a view volume, based on the near and far clipping planes, the aspect ratio, and the height or height angle (depending on the camera type). A view volume, also referred to as a viewing frustum, is a six-sided volume that contains the geometry to be seen (refer to sections on each camera type, later in this chapter, for diagrams showing how the view volume is created). Objects outside of the view volume are clipped, or thrown away.

  3. The next step is to compress this 3D view volume into a 2D image, similar to the photographic snapshot a camera makes from a real-world scene. This 2D “projection” is now easily mapped to a 2D window on the screen. (See “Mapping the Camera Aspect Ratio to the Viewport”.)

  4. Next, the rest of the scene graph is rendered using the projection created by the camera.
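The clipping and projection steps (steps 2 and 3) can be sketched with plain vector arithmetic. This is an illustrative sketch, not the Inventor implementation; the function names are invented, and the camera is assumed to sit at the origin looking down the negative z-axis.

```cpp
#include <cassert>
#include <cmath>

// Step 2: is a point inside a symmetric perspective view volume defined by
// the near/far planes and the height angle? The width angle follows the
// chapter's formula: widthAngle = heightAngle * aspectRatio.
bool insideViewVolume(float x, float y, float z,
                      float nearDist, float farDist,
                      float heightAngle, float aspectRatio)
{
    float depth = -z;                         // distance in front of camera
    if (depth < nearDist || depth > farDist)  // outside the clipping planes
        return false;
    float halfH = depth * std::tan(heightAngle / 2.0f);
    float halfW = depth * std::tan(heightAngle * aspectRatio / 2.0f);
    return std::fabs(x) <= halfW && std::fabs(y) <= halfH;
}

// Step 3: compress 3D to 2D with the perspective divide; points farther
// from the camera land proportionally closer to the center of the image.
void projectToPlane(float x, float y, float z, float nearDist,
                    float &px, float &py)
{
    float depth = -z;
    px = x * nearDist / depth;
    py = y * nearDist / depth;
}
```

An orthographic camera would skip the divide and clip against a constant-width box instead.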

You can also use the pointAt() method to replace the value in a camera's orientation field. This method sets the camera's orientation to point toward the specified target point. If possible, it keeps the up direction of the camera parallel to the positive y-axis. Otherwise, it makes the up direction of the camera parallel to the positive z-axis.

The syntax for the pointAt() method is as follows:

void pointAt(const SbVec3f &targetPoint)
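The up-direction rule described above can be sketched as follows. This is illustrative, not the Inventor implementation; the lookAt name and the 0.999 parallelism threshold are invented for the example.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b) { return { a.x-b.x, a.y-b.y, a.z-b.z }; }
static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Build a unit view direction from the camera position to the target.
// Pick +y as the up direction unless the view direction is (nearly)
// parallel to it, in which case fall back to +z -- mirroring the rule
// in the text.
void lookAt(Vec3 camPos, Vec3 target, Vec3 &viewDir, Vec3 &up)
{
    viewDir = normalize(sub(target, camPos));
    if (std::fabs(viewDir.y) > 0.999f)     // looking straight up or down
        up = {0.0f, 0.0f, 1.0f};
    else
        up = {0.0f, 1.0f, 0.0f};
}
```

A camera five units up the y-axis looking at the origin, for example, gets the +z fallback.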

Two additional methods for SoCamera are viewAll() and getViewVolume(). The viewAll() method is an easy way to set the camera to view an entire scene graph using the current orientation of the camera. You provide the root node of the scene to be viewed (which usually contains the camera) and a reference to the viewport region used by the render action. The slack parameter is used to position the near and far clipping planes. A slack value of 1.0 (the default) positions the planes for the “tightest fit” around the scene. The syntax for viewAll() is as follows:

void viewAll(SoNode *sceneRoot, const SbViewportRegion &vpRegion,
             float slack = 1.0)

The viewAll() method modifies the camera position, nearDistance, and farDistance fields. It does not affect the camera orientation. An example showing the use of viewAll() appears in “Viewing a Scene with Different Cameras”.

The getViewVolume() method returns the camera's view volume and is usually used in relation to picking.

Subclasses of SoCamera

The SoCamera class contains two subclasses, as shown in Figure 4-1:

  • SoPerspectiveCamera

  • SoOrthographicCamera


SoPerspectiveCamera

A camera of class SoPerspectiveCamera emulates the human eye: objects farther away appear smaller in size. Perspective camera projections are natural in situations where you want to imitate how objects appear to a human observer.

An SoPerspectiveCamera node has one field in addition to those defined in SoCamera:

heightAngle (SoSFFloat) 

specifies the vertical angle in radians of the camera view volume.

The view volume formed by an SoPerspectiveCamera node is a truncated pyramid, as shown in Figure 4-2. The height angle and the aspect ratio determine the width angle as follows:

widthAngle = heightAngle * aspectRatio


SoOrthographicCamera

In contrast to perspective cameras, cameras of class SoOrthographicCamera produce parallel projections, with no distortions for distance. Orthographic cameras are useful for precise design work, where visual distortions would interfere with exact measurement.

An SoOrthographicCamera node has one field in addition to those defined in SoCamera:

height (SoSFFloat) 

specifies the height of the camera view volume.

The view volume formed by an SoOrthographicCamera node is a rectangular box, as shown in Figure 4-3. The height and aspect ratio determine the width of the rectangle:

width = height * aspectRatio
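Both cross-section formulas translate directly into code. The sketch below is illustrative (function names invented); it also shows how a perspective volume's cross section grows with distance from the viewpoint, whereas the orthographic width is constant.

```cpp
#include <cassert>
#include <cmath>

// Orthographic camera: the view volume is a box, so its width is fixed.
float orthoWidth(float height, float aspectRatio)
{
    return height * aspectRatio;          // width = height * aspectRatio
}

// Perspective camera: the truncated pyramid widens with distance; at
// distance d from the viewpoint the visible height is
// 2 * d * tan(heightAngle / 2).
float perspVisibleHeight(float heightAngle, float distance)
{
    return 2.0f * distance * std::tan(heightAngle / 2.0f);
}
```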

Figure 4-2. View Volume and Viewing Projection for an SoPerspectiveCamera Node

Figure 4-3. View Volume and Viewing Projection for an SoOrthographicCamera Node

Mapping the Camera Aspect Ratio to the Viewport

A viewport is the rectangular area where a scene is rendered. By default, the viewport has the same dimensions as the window (SoXtRenderArea). The viewport is specified when the SoGLRenderAction is constructed (see Chapter 9).

The viewportMapping field of SoCamera allows you to specify how to map the camera projection into the viewport when the aspect ratios of the camera and viewport differ. The first three choices crop the viewport to fit the camera projection. The advantage to these settings is that the camera aspect ratio remains unchanged. (The disadvantage is that there is dead space in the viewport.)

  • CROP_VIEWPORT_FILL_FRAME adjusts the viewport to fit the camera (see Figure 4-4). It draws the viewport with the appropriate aspect ratio and fills in the unused space with gray.

  • CROP_VIEWPORT_LINE_FRAME adjusts the viewport to fit the camera. It draws the border of the viewport as a line.

  • CROP_VIEWPORT_NO_FRAME adjusts the viewport to fit the camera. It does not indicate the viewport boundaries.

These two choices adjust the camera projection to fit the viewport:

  • ADJUST_CAMERA adjusts the camera to fit the viewport (see Figure 4-4). The projected image is not distorted. (The actual values stored in the aspectRatio and height/heightAngle fields are not changed. These values are temporarily overridden if required by the viewport mapping.) This is the default setting.

  • LEAVE_ALONE does not modify anything. The camera image is resized to fit the viewport, producing a distorted image (see Figure 4-4).

Figure 4-4 shows the different types of viewport mapping. In this example, the camera aspect ratio is 3 to 1 and the viewport aspect ratio is 1.5 to 1. The top camera uses CROP_VIEWPORT_FILL_FRAME viewport mapping. The center camera uses ADJUST_CAMERA. The bottom camera uses LEAVE_ALONE. Figure 4-4 also shows three stages of mapping. At the left is the initial viewport mapping. The center column of drawings shows how the mapping changes if the viewport is compressed horizontally. The right-hand column shows how the mapping changes if the viewport is compressed vertically.
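The cropping arithmetic behind the CROP_VIEWPORT_* settings can be sketched as follows. This is an illustrative sketch, not Inventor's implementation; the function name is invented.

```cpp
#include <cassert>

// Find the largest sub-rectangle of the viewport that has the camera's
// aspect ratio (width / height). The remaining space becomes the gray
// fill, line frame, or invisible border, depending on the setting.
void cropViewport(float vpWidth, float vpHeight, float cameraAspect,
                  float &outWidth, float &outHeight)
{
    float vpAspect = vpWidth / vpHeight;
    if (vpAspect > cameraAspect) {        // viewport too wide: pillarbox
        outHeight = vpHeight;
        outWidth  = vpHeight * cameraAspect;
    } else {                              // viewport too tall: letterbox
        outWidth  = vpWidth;
        outHeight = vpWidth / cameraAspect;
    }
}
```

With the 3-to-1 camera of this example in a 300 x 200 viewport (1.5 to 1), the cropped region comes out 300 x 100, which matches the letterboxed frames in Figure 4-4.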

Viewing a Scene with Different Cameras

Example 4-1 shows a scene viewed by an orthographic camera and two perspective cameras in different positions. It uses a blinker node (described in Chapter 13) to switch among the three cameras. The scene (a park bench) is read from a file. Figure 4-5 shows the scene graph created by this example. Figure 4-6 shows the image created by this example.

Figure 4-4. Mapping the Camera Aspect Ratio to the Viewport

Figure 4-5. Scene Graph for Camera Example

Example 4-1. Switching among Multiple Cameras

#include <Inventor/SbLinear.h>
#include <Inventor/SoDB.h>
#include <Inventor/SoInput.h>
#include <Inventor/Xt/SoXt.h>
#include <Inventor/Xt/SoXtRenderArea.h>
#include <Inventor/nodes/SoBlinker.h>
#include <Inventor/nodes/SoDirectionalLight.h>
#include <Inventor/nodes/SoMaterial.h>
#include <Inventor/nodes/SoOrthographicCamera.h>
#include <Inventor/nodes/SoPerspectiveCamera.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoTransform.h>

int main(int, char **argv)
{
   // Initialize Inventor and Xt
   Widget myWindow = SoXt::init(argv[0]);
   if (myWindow == NULL) 
      return 1;

   SoSeparator *root = new SoSeparator;
   root->ref();

Figure 4-6. Camera Example

   // Create a blinker node and put it in the scene. A blinker
   // switches between its children at timed intervals.
   SoBlinker *myBlinker = new SoBlinker;
   root->addChild(myBlinker);

   // Create three cameras. Their positions will be set later.
   // This is because the viewAll method depends on the size
   // of the render area, which has not been created yet.
   SoOrthographicCamera *orthoViewAll = new SoOrthographicCamera;
   SoPerspectiveCamera *perspViewAll = new SoPerspectiveCamera;
   SoPerspectiveCamera *perspOffCenter = new SoPerspectiveCamera;
   myBlinker->addChild(orthoViewAll);
   myBlinker->addChild(perspViewAll);
   myBlinker->addChild(perspOffCenter);

   // Create a light
   root->addChild(new SoDirectionalLight);

   // Read the object from a file and add to the scene
   SoInput myInput;
   if (! myInput.openFile("parkbench.iv")) 
      return 1;
   SoSeparator *fileContents = SoDB::readAll(&myInput);
   if (fileContents == NULL) 
      return 1;

   SoMaterial *myMaterial = new SoMaterial;
   myMaterial->diffuseColor.setValue(0.8, 0.23, 0.03); 
   root->addChild(myMaterial);
   root->addChild(fileContents);

   SoXtRenderArea *myRenderArea = new SoXtRenderArea(myWindow);

   // Establish camera positions. 
   // First do a viewAll() on all three cameras.  
   // Then modify the position of the off-center camera.
   SbViewportRegion myRegion(myRenderArea->getSize());
   orthoViewAll->viewAll(root, myRegion);
   perspViewAll->viewAll(root, myRegion);
   perspOffCenter->viewAll(root, myRegion);
   SbVec3f initialPos; 
   initialPos = perspOffCenter->position.getValue();
   float x, y, z;
   initialPos.getValue(x, y, z);
   perspOffCenter->position.setValue(x+x/2., y+y/2., z+z/4.);

   myRenderArea->setSceneGraph(root);
   myRenderArea->setTitle("Cameras");
   myRenderArea->show();

   SoXt::show(myWindow);
   SoXt::mainLoop();
}

After you view this example, experiment by modifying the fields in each camera node to see how changes in camera position, orientation, aspect ratio, location of clipping planes, and camera height (or height angle) affect the images on your screen. Then try using the pointAt() method to modify the orientation of the camera node. Remember that a scene graph includes only one active camera at a time, and it must be placed before the objects to be viewed.


Lights

With the default lighting model (Phong), a scene graph also needs at least one light before you can view its objects. During a rendering action, traversing a light node in the scene graph turns that light on. The position of the light node in the scene graph determines two things:

  • What the light illuminates—a light illuminates everything that follows it in the scene graph. (The light is part of the traversal state, described in Chapter 3. Use an SoSeparator node to isolate the effects of a particular light from the rest of the scene graph.)

  • Where the light is located in 3D space—certain light-source nodes (for example, SoPointLight) have a location field. This light location is affected by the current geometric transformation. Other light-source nodes have a specified direction (for example, SoDirectionalLight), which is also affected by the current geometric transformation.

Another important fact about all light-source nodes is that lights accumulate. Each time you add a light to the scene graph, the scene appears brighter. The maximum number of active lights is dependent on the OpenGL implementation.
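The accumulation rule can be illustrated in a few lines. This is a sketch under the assumption that displayed intensity saturates at 1.0; it is not Inventor or OpenGL code, and the function name is invented.

```cpp
#include <cassert>
#include <algorithm>

// Each active light adds its contribution at a surface point; the result
// is clamped to the maximum displayable intensity of 1.0. Adding lights
// therefore brightens the scene only up to that ceiling.
float accumulate(const float *lightContributions, int numLights)
{
    float total = 0.0f;
    for (int i = 0; i < numLights; ++i)
        total += lightContributions[i];
    return std::min(total, 1.0f);   // clamp at full intensity
}
```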

In some cases, you may want to separate the position of the light in the scene graph from what it illuminates. Example 4-2 uses the SoTransformSeparator node to move only the position of the light. Sensors and engines are also a useful way to affect a light's behavior. For example, you can attach a sensor to a sphere object; when the sphere position changes, the sensor can change the light position as well. Or, you can use an engine that finds the path to a given object to affect the location of the light that illuminates that object (see SoComputeBoundingBox in the Open Inventor C++ Reference Manual).


All lights are derived from the abstract base class SoLight. This class adds no new methods to SoNode. Its fields are as follows:

on (SoSFBool) 

whether the light is on.

intensity (SoSFFloat) 

brightness of the light. Values range from 0.0 (no illumination) to 1.0 (maximum illumination).

color (SoSFColor) 

color of the light.

Subclasses of SoLight

The SoLight class contains three subclasses, as shown in Figure 4-7:

  • SoPointLight

  • SoDirectionalLight

  • SoSpotLight

Figure 4-7. Light-Node Classes

Figure 4-8 shows the effects of each of these light types. The left side of the figure shows the direction of the light rays, and the right side shows the same scene rendered with each light type. Figure In-2, Figure In-3, and Figure In-4 show additional use of these light types.

Tip: Directional lights are typically faster than point lights for rendering. Both are typically faster than spotlights. To increase rendering speed, use fewer and simpler lights.


SoPointLight

A light of class SoPointLight, like a star, radiates light equally in all directions from a given location in 3D space. An SoPointLight node has one additional field:

location (SoSFVec3f) 

3D location of a point light source. (This location is affected by the current geometric transformation.)


SoDirectionalLight

A light of class SoDirectionalLight illuminates uniformly along a particular direction. Since it is infinitely far away, it has no location in 3D space. An SoDirectionalLight node has one additional field:

direction (SoSFVec3f) 

specifies the direction of the rays from a directional light source. (This direction is affected by the current geometric transformation.)

Figure 4-8. Light Types

Tip: A surface composed of a single polygon (such as a large rectangle) with one normal at each corner will not show the effects of a point light source, since lighting is computed (by OpenGL) only at vertices. Use a more complex surface to show this effect.

With an SoDirectionalLight source node, all rays of incident light are parallel. They are reflected equally from all points on a flat polygon, resulting in flat lighting of equal intensity, as shown in Figure 4-8. In contrast, the intensity of light from an SoPointLight source on a flat surface would vary, because the angle between the surface normal and the incident ray of light is different at different points of the surface.
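The contrast above can be shown with a minimal diffuse-lighting sketch, assuming simple Lambertian (cosine) shading. The names are invented, and lightDir here points toward the light, the reverse of SoDirectionalLight's direction field, which points away from the light.

```cpp
#include <cassert>
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x/len, v.y/len, v.z/len };
}

// Directional light: the light direction is the same everywhere, so a flat
// polygon (constant normal) is lit with constant intensity.
float directionalIntensity(Vec3 normal, Vec3 lightDir)
{
    return std::max(0.0f, dot(normal, normalize(lightDir)));
}

// Point light: the direction toward the light varies from point to point,
// so intensity varies across the same flat polygon.
float pointIntensity(Vec3 normal, Vec3 surfacePoint, Vec3 lightPos)
{
    Vec3 toLight = { lightPos.x - surfacePoint.x,
                     lightPos.y - surfacePoint.y,
                     lightPos.z - surfacePoint.z };
    return std::max(0.0f, dot(normal, normalize(toLight)));
}
```

For a polygon in the xy-plane with a point light directly above the origin, the intensity at the origin is 1.0 but falls off toward the polygon's edges; a directional light shining straight down gives 1.0 everywhere.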


SoSpotLight

A light of class SoSpotLight illuminates from a point in space along a primary direction. Like a theatrical spotlight, its illumination is a cone of light diverging from the light's position. An SoSpotLight node has four additional fields (see Figure 4-9):

location (SoSFVec3f) 

3D location of a spotlight source. (This location is affected by the current geometric transformation.)

direction (SoSFVec3f) 

primary direction of the illumination.

dropOffRate (SoSFFloat) 

rate at which the light intensity drops off from the primary direction (0.0 = constant intensity, 1.0 = sharpest drop-off).

cutOffAngle (SoSFFloat) 

angle, in radians, outside of which the light intensity is 0.0. This angle is measured from one edge of the cone to the other.
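One plausible falloff model combining these two fields can be sketched as follows. This is illustrative only: the cosine-power curve and the dropOffRate-to-exponent mapping are assumptions for the example, not the documented Inventor behavior. Following the text above, cutOffAngle spans the full cone, so the comparison uses half of it.

```cpp
#include <cassert>
#include <cmath>

// 'angle' is measured from the spotlight's primary direction.
float spotIntensity(float angle, float cutOffAngle, float dropOffRate)
{
    if (angle > cutOffAngle / 2.0f)   // outside the cone: no light
        return 0.0f;
    // dropOffRate 0.0 = constant intensity; larger values concentrate
    // the light toward the primary direction via a cosine power curve.
    // The factor 128 is an assumed OpenGL-style exponent mapping.
    float exponent = dropOffRate * 128.0f;
    return std::pow(std::cos(angle), exponent);
}
```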

Using Multiple Lights

You can now experiment by adding different lights to a scene. Example 4-2 contains two light sources: a stationary red directional light and a green point light that is moved back and forth by an SoShuttle node (see Chapter 13). Figure 4-10 shows the scene graph created by this example.

Figure 4-9. Fields for SoSpotLight Node

Example 4-2. Using Different Types of Lights

#include <Inventor/SoDB.h>
#include <Inventor/Xt/SoXt.h>
#include <Inventor/Xt/viewers/SoXtExaminerViewer.h>
#include <Inventor/nodes/SoCone.h>
#include <Inventor/nodes/SoDirectionalLight.h>
#include <Inventor/nodes/SoMaterial.h>
#include <Inventor/nodes/SoPointLight.h>
#include <Inventor/nodes/SoSeparator.h>
#include <Inventor/nodes/SoShuttle.h>
#include <Inventor/nodes/SoTransformSeparator.h>

int main(int, char **argv)
{
   // Initialize Inventor and Xt
   Widget myWindow = SoXt::init(argv[0]);
   if (myWindow == NULL) 
      return 1;

   SoSeparator *root = new SoSeparator;
   root->ref();

   // Add a directional light
   SoDirectionalLight *myDirLight = new SoDirectionalLight;
   myDirLight->direction.setValue(0, -1, -1);
   myDirLight->color.setValue(1, 0, 0);
   root->addChild(myDirLight);

   // Put the shuttle and the light below a transform separator.
   // A transform separator pushes and pops the transformation 
   // just like a separator node, but other aspects of the state 
   // are not pushed and popped. So the shuttle's translation 
   // will affect only the light. But the light will shine on 
   // the rest of the scene.
   SoTransformSeparator *myTransformSeparator =
       new SoTransformSeparator;
   root->addChild(myTransformSeparator);

   // A shuttle node translates back and forth between the two
   // fields translation0 and translation1.  
   // This moves the light.
   SoShuttle *myShuttle = new SoShuttle;
   myTransformSeparator->addChild(myShuttle);
   myShuttle->translation0.setValue(-2, -1, 3);
   myShuttle->translation1.setValue( 1,  2, -3);

   // Add the point light below the transformSeparator
   SoPointLight *myPointLight = new SoPointLight;
   myTransformSeparator->addChild(myPointLight);
   myPointLight->color.setValue(0, 1, 0);

Figure 4-10. Scene Graph for Light Example

   root->addChild(new SoCone);

   SoXtExaminerViewer *myViewer = 
            new SoXtExaminerViewer(myWindow);