  • Pinhole Camera

    • The simplest camera model — all rays pass through a single point (the pinhole)
    • No depth of field — everything is in focus
    • Parameters
      • position — camera origin in world space
      • forward — direction the camera looks (normalized)
      • up — world up vector (used to compute right vector)
      • fov — vertical field of view (degrees or radians)
      • aspect_ratio — width / height
    • Building the camera basis
      • right = normalize(cross(forward, up))
      • true_up = cross(right, forward) (unit length, since right and forward are orthonormal)
    • Viewport dimensions
      • viewport_height = 2 * tan(fov / 2) (at unit focal distance)
      • viewport_width = viewport_height * aspect_ratio
    • Generating a ray for pixel (i, j) in a W × H image
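The pinhole steps above can be sketched as follows; the vector helpers and parameter names are illustrative, not a specific library's API:

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    l = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
    return (v[0] / l, v[1] / l, v[2] / l)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def pinhole_ray(position, forward, up, fov_rad, aspect, i, j, W, H):
    # Build the camera basis from forward and the world up vector.
    right = normalize(cross(forward, up))
    true_up = cross(right, forward)
    # Viewport dimensions at unit focal distance.
    vh = 2.0 * math.tan(fov_rad / 2.0)
    vw = vh * aspect
    # Map the pixel center to viewport coordinates (y flipped so j = 0 is the top row).
    u = ((i + 0.5) / W - 0.5) * vw
    v = (0.5 - (j + 0.5) / H) * vh
    direction = normalize(tuple(
        forward[k] + u * right[k] + v * true_up[k] for k in range(3)))
    return position, direction
```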

  • Thin Lens Camera (Depth of Field)

    • Real cameras have a lens with finite aperture
    • Objects at the focal distance are sharp; others are blurry (bokeh)
    • Parameters (in addition to pinhole)
      • aperture — lens diameter (larger = more blur)
      • focus_distance — distance to the focal plane
    • Algorithm
      • Sample a random point on the lens disk: lens_offset = aperture/2 * random_disk()
      • Compute the focus point: focus_point = position + focus_distance * ray_dir (ray_dir = the normalized direction of the original pinhole ray)
      • New ray origin: origin = position + lens_offset.x * right + lens_offset.y * up
      • New ray direction: direction = normalize(focus_point - origin)
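A minimal sketch of the thin-lens algorithm above, assuming ray_dir is already normalized and focus_distance is measured along the ray (some renderers instead measure it along the forward axis):

```python
import math
import random

def random_disk(rng=random):
    # Rejection-sample a point uniformly on the unit disk.
    while True:
        x = rng.uniform(-1.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        if x * x + y * y <= 1.0:
            return x, y

def thin_lens_ray(position, right, true_up, ray_dir,
                  aperture, focus_distance, rng=random):
    # Point on the focal plane the original pinhole ray would hit.
    focus_point = tuple(position[k] + focus_distance * ray_dir[k] for k in range(3))
    # Jitter the ray origin across the lens disk.
    dx, dy = random_disk(rng)
    lx, ly = aperture / 2 * dx, aperture / 2 * dy
    origin = tuple(position[k] + lx * right[k] + ly * true_up[k] for k in range(3))
    # Aim the jittered ray back at the focus point.
    length = math.sqrt(sum((focus_point[k] - origin[k]) ** 2 for k in range(3)))
    direction = tuple((focus_point[k] - origin[k]) / length for k in range(3))
    return origin, direction
```

With aperture = 0 this degenerates to the pinhole ray, which is a handy sanity check.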

  • Field of View

    • Vertical FOV: angle from bottom to top of the image
    • Horizontal FOV: 2 * atan(tan(vfov/2) * aspect_ratio)
    • Common values: 60° (telephoto feel), 90° (wide), 120° (very wide)
    • In Godot: Camera3D.fov is vertical FOV in degrees
    • Relationship to focal length: fov = 2 * atan(sensor_height / (2 * focal_length))
      • 50mm lens on a full-frame 35mm sensor (24mm sensor height): vertical fov ≈ 27.0° (normal lens)
      • 24mm lens: vertical fov ≈ 53.1° (wide angle)
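Both conversions above are short enough to sketch directly (24 mm is the full-frame sensor height assumed here):

```python
import math

def vfov_to_hfov(vfov_deg, aspect):
    # Horizontal FOV from vertical FOV and aspect ratio (width / height).
    v = math.radians(vfov_deg)
    return math.degrees(2.0 * math.atan(math.tan(v / 2.0) * aspect))

def focal_length_to_vfov(focal_mm, sensor_height_mm=24.0):
    # Vertical FOV of a lens on a sensor of the given height.
    return math.degrees(2.0 * math.atan(sensor_height_mm / (2.0 * focal_mm)))
```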

  • Camera in Vulkan Ray Tracing

    • Pass camera data as a uniform buffer to the ray generation shader
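One way to prepare that uniform buffer on the host side is to pack the camera fields into bytes with an std140-compatible layout. The GLSL block below is hypothetical (names and bindings are illustrative); using vec4 members sidesteps the vec3 alignment pitfalls of std140:

```python
import struct

def pack_camera_ubo(position, forward, up, fov_rad, aspect):
    # Matches a hypothetical std140 uniform block in the ray generation shader:
    #   layout(std140, set = 0, binding = 0) uniform Camera {
    #       vec4 position;        // xyz used, w unused
    #       vec4 forward;         // xyz used
    #       vec4 up;              // xyz used
    #       float fov;            // vertical FOV, radians
    #       float aspect_ratio;
    #   };
    # Three vec4s (16 bytes each) + two floats, padded to 64 bytes total.
    return struct.pack("<4f4f4f2f8x",
                       *position, 0.0, *forward, 0.0, *up, 0.0,
                       fov_rad, aspect)
```

The resulting bytes would be uploaded to a VkBuffer with uniform-buffer usage and bound via a descriptor set.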

  • Motion Blur

    • Shutter opens for a time interval [t0, t1]
    • Sample a random time t = lerp(t0, t1, random())
    • Evaluate object transforms at time t
    • Requires per-object velocity data or interpolated transforms
    • In path tracing: each ray sample uses a different time → natural motion blur
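The per-sample time step above can be sketched as follows; lerp_transform is a stand-in for full transform interpolation (real renderers interpolate rotation and scale too):

```python
import random

def sample_shutter_time(t0, t1, rng=random):
    # Uniform random time within the shutter interval [t0, t1].
    return t0 + (t1 - t0) * rng.random()

def lerp_transform(pos_a, pos_b, t0, t1, t):
    # Linearly interpolate an object's position between two keyframed
    # positions at times t0 and t1.
    s = (t - t0) / (t1 - t0)
    return tuple(a + s * (b - a) for a, b in zip(pos_a, pos_b))
```

Each path-traced sample draws its own t, so averaging the samples integrates over the shutter interval and produces motion blur with no extra machinery.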